Test Report: Docker_Linux_crio 21997

4e6ec0ce1ba9ad510ab2048b3373e13c9f965153:2025-12-05:42642

Failed tests (48/415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.24
44 TestAddons/parallel/Registry 13.43
45 TestAddons/parallel/RegistryCreds 0.41
46 TestAddons/parallel/Ingress 148.91
47 TestAddons/parallel/InspektorGadget 5.24
48 TestAddons/parallel/MetricsServer 6.3
50 TestAddons/parallel/CSI 49.02
51 TestAddons/parallel/Headlamp 2.48
52 TestAddons/parallel/CloudSpanner 5.24
53 TestAddons/parallel/LocalPath 9.07
54 TestAddons/parallel/NvidiaDevicePlugin 5.26
55 TestAddons/parallel/Yakd 6.24
56 TestAddons/parallel/AmdGpuDevicePlugin 6.23
106 TestFunctional/parallel/ServiceCmdConnect 602.69
123 TestFunctional/parallel/ServiceCmd/DeployApp 600.6
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.9
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.23
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
162 TestFunctional/parallel/ServiceCmd/Format 0.52
163 TestFunctional/parallel/ServiceCmd/URL 0.52
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 602.75
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.04
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.16
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 4.35
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.31
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.2
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.38
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 600.54
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.52
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.52
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.52
294 TestJSONOutput/pause/Command 2.32
300 TestJSONOutput/unpause/Command 2.23
370 TestPause/serial/Pause 5.69
451 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.26
456 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.98
459 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.46
467 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.55
471 TestStartStop/group/old-k8s-version/serial/Pause 6.68
479 TestStartStop/group/no-preload/serial/Pause 6.15
482 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.19
488 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.83
492 TestStartStop/group/newest-cni/serial/Pause 5.25
496 TestStartStop/group/embed-certs/serial/Pause 5.01
TestAddons/serial/Volcano (0.24s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable volcano --alsologtostderr -v=1: exit status 11 (241.885072ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:00.946824   25947 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:00.946964   25947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:00.946974   25947 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:00.946978   25947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:00.947220   25947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:00.947461   25947 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:00.947767   25947 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:00.947787   25947 addons.go:622] checking whether the cluster is paused
	I1205 06:07:00.947868   25947 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:00.947882   25947 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:00.948196   25947 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:00.965823   25947 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:00.965880   25947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:00.982502   25947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:01.078223   25947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:01.078285   25947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:01.105662   25947 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:01.105694   25947 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:01.105700   25947 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:01.105703   25947 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:01.105706   25947 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:01.105710   25947 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:01.105715   25947 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:01.105719   25947 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:01.105724   25947 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:01.105744   25947 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:01.105753   25947 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:01.105758   25947 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:01.105763   25947 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:01.105771   25947 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:01.105776   25947 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:01.105789   25947 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:01.105797   25947 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:01.105803   25947 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:01.105807   25947 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:01.105812   25947 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:01.105818   25947 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:01.105826   25947 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:01.105839   25947 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:01.105847   25947 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:01.105852   25947 cri.go:89] found id: ""
	I1205 06:07:01.105919   25947 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:01.119826   25947 out.go:203] 
	W1205 06:07:01.121082   25947 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:01.121099   25947 out.go:285] * 
	* 
	W1205 06:07:01.124024   25947 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:01.125071   25947 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
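
Note: this MK_ADDON_DISABLE_PAUSED exit recurs across the addon tests in this report. Before disabling an addon, minikube checks whether the cluster is paused (addons.go:622, "checking whether the cluster is paused"), and that check runs "sudo runc list -f json" on the node; on this crio node the command fails with "open /run/runc: no such file or directory", so the disable aborts before the addon is touched. Below is a minimal, hypothetical Go sketch of such a paused check (not minikube's actual implementation): it shells out to runc the same way the log shows, but treats a missing runc state directory as "no paused containers" rather than a hard error.

	// Hypothetical sketch, not minikube's implementation: an "is anything paused?"
	// check that shells out to runc exactly as the failing log above does, but
	// treats a missing /run/runc state directory as "no paused containers"
	// instead of a hard error.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// runcContainer mirrors the fields of interest in `runc list -f json` output.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// pausedContainers returns the IDs runc reports as paused. A missing state
	// directory (the exact failure in this report) yields an empty list.
	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "no such file or directory") {
				return nil, nil // no runc state dir: nothing can be paused
			}
			return nil, fmt.Errorf("runc list: %v: %s", err, out)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		if err != nil {
			fmt.Println("paused check failed:", err)
			return
		}
		fmt.Printf("%d paused container(s): %v\n", len(ids), ids)
	}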

                                                
                                    
TestAddons/parallel/Registry (13.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.021967ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002666851s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00288521s
addons_test.go:392: (dbg) Run:  kubectl --context addons-177895 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-177895 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-177895 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.994428531s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 ip
2025/12/05 06:07:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable registry --alsologtostderr -v=1: exit status 11 (232.235244ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:22.196232   28233 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:22.196547   28233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:22.196558   28233 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:22.196562   28233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:22.196756   28233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:22.197112   28233 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:22.197452   28233 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:22.197472   28233 addons.go:622] checking whether the cluster is paused
	I1205 06:07:22.197562   28233 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:22.197577   28233 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:22.197914   28233 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:22.214988   28233 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:22.215030   28233 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:22.231524   28233 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:22.327314   28233 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:22.327424   28233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:22.354349   28233 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:22.354370   28233 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:22.354376   28233 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:22.354381   28233 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:22.354384   28233 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:22.354389   28233 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:22.354393   28233 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:22.354398   28233 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:22.354403   28233 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:22.354410   28233 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:22.354415   28233 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:22.354420   28233 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:22.354426   28233 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:22.354434   28233 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:22.354439   28233 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:22.354451   28233 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:22.354459   28233 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:22.354466   28233 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:22.354470   28233 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:22.354474   28233 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:22.354479   28233 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:22.354486   28233 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:22.354491   28233 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:22.354498   28233 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:22.354505   28233 cri.go:89] found id: ""
	I1205 06:07:22.354548   28233 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:22.367043   28233 out.go:203] 
	W1205 06:07:22.368039   28233 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:22.368061   28233 out.go:285] * 
	* 
	W1205 06:07:22.371392   28233 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:22.372443   28233 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.43s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.41s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.891432ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-177895
addons_test.go:332: (dbg) Run:  kubectl --context addons-177895 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (257.816365ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:27.831911   28666 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:27.832212   28666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:27.832222   28666 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:27.832227   28666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:27.832409   28666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:27.832648   28666 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:27.832947   28666 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:27.832964   28666 addons.go:622] checking whether the cluster is paused
	I1205 06:07:27.833054   28666 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:27.833072   28666 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:27.833532   28666 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:27.853944   28666 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:27.853999   28666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:27.873624   28666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:27.975677   28666 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:27.975766   28666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:28.006163   28666 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:28.006186   28666 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:28.006190   28666 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:28.006193   28666 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:28.006196   28666 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:28.006201   28666 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:28.006206   28666 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:28.006211   28666 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:28.006216   28666 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:28.006226   28666 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:28.006231   28666 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:28.006236   28666 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:28.006245   28666 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:28.006250   28666 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:28.006257   28666 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:28.006265   28666 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:28.006272   28666 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:28.006278   28666 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:28.006281   28666 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:28.006284   28666 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:28.006286   28666 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:28.006289   28666 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:28.006292   28666 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:28.006294   28666 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:28.006303   28666 cri.go:89] found id: ""
	I1205 06:07:28.006363   28666 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:28.019251   28666 out.go:203] 
	W1205 06:07:28.020502   28666 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:28.020519   28666 out.go:285] * 
	* 
	W1205 06:07:28.023459   28666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:28.024555   28666 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.41s)

                                                
                                    
TestAddons/parallel/Ingress (148.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-177895 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-177895 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-177895 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [84ca3301-2a3c-4a90-876c-32de9785e34c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [84ca3301-2a3c-4a90-876c-32de9785e34c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002745023s
I1205 06:07:19.162192   16314 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.753226353s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-177895 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
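
The failing step here is a plain reachability probe: the test runs curl against http://127.0.0.1/ with the Host header nginx.example.com from inside the node (via minikube ssh), and it gave up after roughly 2m15s with exit status 28, curl's operation-timed-out code. Below is a small, hypothetical Go reproduction of that probe, run from the host against the node IP used in the nslookup above (192.168.49.2) rather than via ssh; the 30-second timeout is an arbitrary choice, not a value taken from the test.

	// Hypothetical reproduction of the failing ingress probe, run from the host
	// rather than via `minikube ssh`: GET the node IP with the Host header the
	// test uses, and give up after a bounded timeout.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// 30s is an arbitrary bound; the test's curl ran ~2m15s before timing out.
		client := &http.Client{Timeout: 30 * time.Second}

		req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
		if err != nil {
			panic(err)
		}
		// The ingress rule applied from testdata/nginx-ingress-v1.yaml is presumably
		// keyed on this host name, matching the curl -H 'Host: ...' in the test.
		req.Host = "nginx.example.com"

		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("probe failed (matches the curl timeout in the log):", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %d, %d bytes\n", resp.StatusCode, len(body))
	}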
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-177895
helpers_test.go:243: (dbg) docker inspect addons-177895:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645",
	        "Created": "2025-12-05T06:05:20.814441685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18726,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:05:20.844462315Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645/hosts",
	        "LogPath": "/var/lib/docker/containers/ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645/ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645-json.log",
	        "Name": "/addons-177895",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-177895:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-177895",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645",
	                "LowerDir": "/var/lib/docker/overlay2/527996caf9ce51538de51edf898879f8e40e85f245ffd1a675545ee5e06789d4-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/527996caf9ce51538de51edf898879f8e40e85f245ffd1a675545ee5e06789d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/527996caf9ce51538de51edf898879f8e40e85f245ffd1a675545ee5e06789d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/527996caf9ce51538de51edf898879f8e40e85f245ffd1a675545ee5e06789d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-177895",
	                "Source": "/var/lib/docker/volumes/addons-177895/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-177895",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-177895",
	                "name.minikube.sigs.k8s.io": "addons-177895",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5ccf52cc4eea1f5162c934809d25e5eb4739fe77f52933ac0a60ea4a4d077b2c",
	            "SandboxKey": "/var/run/docker/netns/5ccf52cc4eea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-177895": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cb9fdd45e8b65c8a9fe9be25b359f6f1c5cf5d1ed8bbc11638339eb81ec8d245",
	                    "EndpointID": "502c6638825bdf5815a6cd34a702ae716948c0c6e6ead2573795f4fa79e8b25d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "7a:ae:93:f6:5b:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-177895",
	                        "ed37239a37c9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-177895 -n addons-177895
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-177895 logs -n 25: (1.061448158s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-565262 --alsologtostderr --binary-mirror http://127.0.0.1:40985 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-565262 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ -p binary-mirror-565262                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-565262 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ addons  │ disable dashboard -p addons-177895                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ addons  │ enable dashboard -p addons-177895                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ start   │ -p addons-177895 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:07 UTC │
	│ addons  │ addons-177895 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ addons-177895 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ enable headlamp -p addons-177895 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ addons-177895 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ addons-177895 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ addons-177895 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ ssh     │ addons-177895 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ addons-177895 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ ip      │ addons-177895 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │ 05 Dec 25 06:07 UTC │
	│ addons  │ addons-177895 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ addons-177895 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ addons-177895 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-177895                                                                                                                                                                                                                                                                                                                                                                                           │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │ 05 Dec 25 06:07 UTC │
	│ addons  │ addons-177895 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ ssh     │ addons-177895 ssh cat /opt/local-path-provisioner/pvc-981059f5-0a3f-45ab-b5d0-3cd374252d92_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │ 05 Dec 25 06:07 UTC │
	│ addons  │ addons-177895 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ addons-177895 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ addons-177895 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:08 UTC │                     │
	│ addons  │ addons-177895 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:08 UTC │                     │
	│ ip      │ addons-177895 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-177895        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:04:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:04:57.860254   18088 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:04:57.860361   18088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:57.860370   18088 out.go:374] Setting ErrFile to fd 2...
	I1205 06:04:57.860374   18088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:57.860560   18088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:04:57.861058   18088 out.go:368] Setting JSON to false
	I1205 06:04:57.861830   18088 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2842,"bootTime":1764911856,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:04:57.861875   18088 start.go:143] virtualization: kvm guest
	I1205 06:04:57.863389   18088 out.go:179] * [addons-177895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:04:57.864456   18088 notify.go:221] Checking for updates...
	I1205 06:04:57.864473   18088 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:04:57.865490   18088 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:04:57.866497   18088 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:04:57.867460   18088 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:04:57.868466   18088 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:04:57.869420   18088 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:04:57.870585   18088 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:04:57.891960   18088 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:04:57.892090   18088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:04:57.945541   18088 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-05 06:04:57.936959887 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:04:57.945645   18088 docker.go:319] overlay module found
	I1205 06:04:57.947315   18088 out.go:179] * Using the docker driver based on user configuration
	I1205 06:04:57.948338   18088 start.go:309] selected driver: docker
	I1205 06:04:57.948351   18088 start.go:927] validating driver "docker" against <nil>
	I1205 06:04:57.948361   18088 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:04:57.948902   18088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:04:58.000191   18088 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-05 06:04:57.990740778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:04:58.000347   18088 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:04:58.000554   18088 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:04:58.002101   18088 out.go:179] * Using Docker driver with root privileges
	I1205 06:04:58.003167   18088 cni.go:84] Creating CNI manager for ""
	I1205 06:04:58.003221   18088 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:04:58.003231   18088 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:04:58.003279   18088 start.go:353] cluster config:
	{Name:addons-177895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-177895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1205 06:04:58.004405   18088 out.go:179] * Starting "addons-177895" primary control-plane node in "addons-177895" cluster
	I1205 06:04:58.005347   18088 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:04:58.006397   18088 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:04:58.007347   18088 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:04:58.007373   18088 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 06:04:58.007378   18088 cache.go:65] Caching tarball of preloaded images
	I1205 06:04:58.007436   18088 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:04:58.007450   18088 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 06:04:58.007458   18088 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 06:04:58.007810   18088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/config.json ...
	I1205 06:04:58.007835   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/config.json: {Name:mkfe13a4152566762b6d1f392180f8bb40fb4cda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:04:58.022711   18088 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1205 06:04:58.022812   18088 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1205 06:04:58.022830   18088 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1205 06:04:58.022835   18088 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1205 06:04:58.022845   18088 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1205 06:04:58.022849   18088 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1205 06:05:09.857354   18088 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1205 06:05:09.857386   18088 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:05:09.857423   18088 start.go:360] acquireMachinesLock for addons-177895: {Name:mkcd2447083fe8b63b568f53de9d9a8d6faab33c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:09.857527   18088 start.go:364] duration metric: took 84.296µs to acquireMachinesLock for "addons-177895"
	I1205 06:05:09.857551   18088 start.go:93] Provisioning new machine with config: &{Name:addons-177895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-177895 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:05:09.857618   18088 start.go:125] createHost starting for "" (driver="docker")
	I1205 06:05:09.859654   18088 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1205 06:05:09.859867   18088 start.go:159] libmachine.API.Create for "addons-177895" (driver="docker")
	I1205 06:05:09.859896   18088 client.go:173] LocalClient.Create starting
	I1205 06:05:09.859980   18088 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem
	I1205 06:05:10.160427   18088 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem
	I1205 06:05:10.275132   18088 cli_runner.go:164] Run: docker network inspect addons-177895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 06:05:10.292250   18088 cli_runner.go:211] docker network inspect addons-177895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 06:05:10.292319   18088 network_create.go:284] running [docker network inspect addons-177895] to gather additional debugging logs...
	I1205 06:05:10.292352   18088 cli_runner.go:164] Run: docker network inspect addons-177895
	W1205 06:05:10.306828   18088 cli_runner.go:211] docker network inspect addons-177895 returned with exit code 1
	I1205 06:05:10.306850   18088 network_create.go:287] error running [docker network inspect addons-177895]: docker network inspect addons-177895: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-177895 not found
	I1205 06:05:10.306860   18088 network_create.go:289] output of [docker network inspect addons-177895]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-177895 not found
	
	** /stderr **
	I1205 06:05:10.306954   18088 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:05:10.322090   18088 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e402b0}
	I1205 06:05:10.322131   18088 network_create.go:124] attempt to create docker network addons-177895 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 06:05:10.322167   18088 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-177895 addons-177895
	I1205 06:05:10.365500   18088 network_create.go:108] docker network addons-177895 192.168.49.0/24 created
	I1205 06:05:10.365532   18088 kic.go:121] calculated static IP "192.168.49.2" for the "addons-177895" container
	I1205 06:05:10.365581   18088 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 06:05:10.380065   18088 cli_runner.go:164] Run: docker volume create addons-177895 --label name.minikube.sigs.k8s.io=addons-177895 --label created_by.minikube.sigs.k8s.io=true
	I1205 06:05:10.396105   18088 oci.go:103] Successfully created a docker volume addons-177895
	I1205 06:05:10.396173   18088 cli_runner.go:164] Run: docker run --rm --name addons-177895-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177895 --entrypoint /usr/bin/test -v addons-177895:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 06:05:17.028448   18088 cli_runner.go:217] Completed: docker run --rm --name addons-177895-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177895 --entrypoint /usr/bin/test -v addons-177895:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (6.632240129s)
	I1205 06:05:17.028475   18088 oci.go:107] Successfully prepared a docker volume addons-177895
	I1205 06:05:17.028510   18088 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:05:17.028519   18088 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 06:05:17.028569   18088 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-177895:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 06:05:20.749950   18088 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-177895:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.721343207s)
	I1205 06:05:20.749977   18088 kic.go:203] duration metric: took 3.721455547s to extract preloaded images to volume ...
	W1205 06:05:20.750058   18088 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1205 06:05:20.750087   18088 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1205 06:05:20.750120   18088 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 06:05:20.799903   18088 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-177895 --name addons-177895 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177895 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-177895 --network addons-177895 --ip 192.168.49.2 --volume addons-177895:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 06:05:21.080062   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Running}}
	I1205 06:05:21.099753   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:21.117471   18088 cli_runner.go:164] Run: docker exec addons-177895 stat /var/lib/dpkg/alternatives/iptables
	I1205 06:05:21.166451   18088 oci.go:144] the created container "addons-177895" has a running status.
	I1205 06:05:21.166495   18088 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa...
	I1205 06:05:21.210424   18088 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 06:05:21.237703   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:21.254895   18088 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 06:05:21.254919   18088 kic_runner.go:114] Args: [docker exec --privileged addons-177895 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 06:05:21.294146   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:21.316310   18088 machine.go:94] provisionDockerMachine start ...
	I1205 06:05:21.316413   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:21.334757   18088 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:21.335090   18088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 06:05:21.335113   18088 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:05:21.336332   18088 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45030->127.0.0.1:32768: read: connection reset by peer
	I1205 06:05:24.471512   18088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-177895
	
	I1205 06:05:24.471541   18088 ubuntu.go:182] provisioning hostname "addons-177895"
	I1205 06:05:24.471609   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:24.488478   18088 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:24.488690   18088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 06:05:24.488706   18088 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-177895 && echo "addons-177895" | sudo tee /etc/hostname
	I1205 06:05:24.632219   18088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-177895
	
	I1205 06:05:24.632291   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:24.650316   18088 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:24.650530   18088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 06:05:24.650546   18088 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-177895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-177895/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-177895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:05:24.784133   18088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:05:24.784158   18088 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 06:05:24.784175   18088 ubuntu.go:190] setting up certificates
	I1205 06:05:24.784184   18088 provision.go:84] configureAuth start
	I1205 06:05:24.784229   18088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177895
	I1205 06:05:24.801499   18088 provision.go:143] copyHostCerts
	I1205 06:05:24.801558   18088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 06:05:24.801660   18088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 06:05:24.801724   18088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 06:05:24.801773   18088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.addons-177895 san=[127.0.0.1 192.168.49.2 addons-177895 localhost minikube]
	I1205 06:05:24.874981   18088 provision.go:177] copyRemoteCerts
	I1205 06:05:24.875025   18088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:05:24.875055   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:24.891364   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:24.987016   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:05:25.004372   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 06:05:25.019563   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 06:05:25.034636   18088 provision.go:87] duration metric: took 250.44034ms to configureAuth
	I1205 06:05:25.034657   18088 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:05:25.034817   18088 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:05:25.034944   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:25.051721   18088 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:25.051910   18088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 06:05:25.051926   18088 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:05:25.315463   18088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:05:25.315494   18088 machine.go:97] duration metric: took 3.999151809s to provisionDockerMachine
	I1205 06:05:25.315508   18088 client.go:176] duration metric: took 15.455605099s to LocalClient.Create
	I1205 06:05:25.315530   18088 start.go:167] duration metric: took 15.455663009s to libmachine.API.Create "addons-177895"
	I1205 06:05:25.315539   18088 start.go:293] postStartSetup for "addons-177895" (driver="docker")
	I1205 06:05:25.315551   18088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:05:25.315620   18088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:05:25.315665   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:25.332225   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:25.429977   18088 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:05:25.433216   18088 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:05:25.433243   18088 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:05:25.433255   18088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 06:05:25.433307   18088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 06:05:25.433357   18088 start.go:296] duration metric: took 117.811078ms for postStartSetup
	I1205 06:05:25.433624   18088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177895
	I1205 06:05:25.449946   18088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/config.json ...
	I1205 06:05:25.450167   18088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:05:25.450216   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:25.465934   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:25.558475   18088 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:05:25.562461   18088 start.go:128] duration metric: took 15.704830346s to createHost
	I1205 06:05:25.562481   18088 start.go:83] releasing machines lock for "addons-177895", held for 15.704941518s
	I1205 06:05:25.562541   18088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177895
	I1205 06:05:25.578678   18088 ssh_runner.go:195] Run: cat /version.json
	I1205 06:05:25.578715   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:25.578819   18088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:05:25.578896   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:25.595764   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:25.597071   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:25.741239   18088 ssh_runner.go:195] Run: systemctl --version
	I1205 06:05:25.746669   18088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:05:25.777551   18088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:05:25.781554   18088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:05:25.781613   18088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:05:25.805621   18088 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 06:05:25.805637   18088 start.go:496] detecting cgroup driver to use...
	I1205 06:05:25.805665   18088 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 06:05:25.805710   18088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:05:25.819750   18088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:05:25.830428   18088 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:05:25.830472   18088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:05:25.844775   18088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:05:25.859984   18088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:05:25.932481   18088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:05:26.016627   18088 docker.go:234] disabling docker service ...
	I1205 06:05:26.016677   18088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:05:26.032727   18088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:05:26.043951   18088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:05:26.121080   18088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:05:26.197246   18088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:05:26.208144   18088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:05:26.220436   18088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:05:26.220491   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.229603   18088 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 06:05:26.229651   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.237354   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.244903   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.252275   18088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:05:26.259259   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.267011   18088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.278815   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.286437   18088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:05:26.292861   18088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 06:05:26.292900   18088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 06:05:26.303305   18088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:05:26.310441   18088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:05:26.384005   18088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:05:26.505267   18088 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:05:26.505394   18088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:05:26.508997   18088 start.go:564] Will wait 60s for crictl version
	I1205 06:05:26.509056   18088 ssh_runner.go:195] Run: which crictl
	I1205 06:05:26.512166   18088 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:05:26.535096   18088 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:05:26.535183   18088 ssh_runner.go:195] Run: crio --version
	I1205 06:05:26.560220   18088 ssh_runner.go:195] Run: crio --version
	I1205 06:05:26.586801   18088 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 06:05:26.587892   18088 cli_runner.go:164] Run: docker network inspect addons-177895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:05:26.603628   18088 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:05:26.607272   18088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:05:26.616591   18088 kubeadm.go:884] updating cluster {Name:addons-177895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-177895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:05:26.616700   18088 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:05:26.616750   18088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:05:26.644770   18088 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:05:26.644786   18088 crio.go:433] Images already preloaded, skipping extraction
	I1205 06:05:26.644823   18088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:05:26.667237   18088 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:05:26.667256   18088 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:05:26.667266   18088 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1205 06:05:26.667407   18088 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-177895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-177895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:05:26.667499   18088 ssh_runner.go:195] Run: crio config
	I1205 06:05:26.709743   18088 cni.go:84] Creating CNI manager for ""
	I1205 06:05:26.709764   18088 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:05:26.709785   18088 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:05:26.709813   18088 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-177895 NodeName:addons-177895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:05:26.709940   18088 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-177895"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:05:26.710003   18088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 06:05:26.717312   18088 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:05:26.717375   18088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:05:26.724240   18088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1205 06:05:26.735707   18088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 06:05:26.749189   18088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1205 06:05:26.760170   18088 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:05:26.763242   18088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:05:26.771930   18088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:05:26.846421   18088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:05:26.868364   18088 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895 for IP: 192.168.49.2
	I1205 06:05:26.868381   18088 certs.go:195] generating shared ca certs ...
	I1205 06:05:26.868395   18088 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:26.868504   18088 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 06:05:26.967363   18088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt ...
	I1205 06:05:26.967386   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt: {Name:mk9820ca0baeabc29c6b7a204a5424632bc7dee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:26.967531   18088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key ...
	I1205 06:05:26.967543   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key: {Name:mkf2cb335d8447035dc4c895cc3dcd92e8d7756b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:26.967619   18088 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 06:05:27.028983   18088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt ...
	I1205 06:05:27.029003   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt: {Name:mka3d3c95e6815223c59da77efa96499ba48ea47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.029122   18088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key ...
	I1205 06:05:27.029133   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key: {Name:mk1e4251ab00860b186400fc98ca84639f085626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.029198   18088 certs.go:257] generating profile certs ...
	I1205 06:05:27.029256   18088 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.key
	I1205 06:05:27.029270   18088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt with IP's: []
	I1205 06:05:27.084580   18088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt ...
	I1205 06:05:27.084601   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: {Name:mk4b7ba93006a6a9f124b381c8f215dd5347c42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.084724   18088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.key ...
	I1205 06:05:27.084735   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.key: {Name:mk4b756913f387647876204f5b100305a846d3d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.084891   18088 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key.3b67d508
	I1205 06:05:27.084922   18088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt.3b67d508 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 06:05:27.183428   18088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt.3b67d508 ...
	I1205 06:05:27.183449   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt.3b67d508: {Name:mkbdaa894d6ad23fff1845806df83a0d503059db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.183578   18088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key.3b67d508 ...
	I1205 06:05:27.183590   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key.3b67d508: {Name:mk09cc5dfb0995bfecb789c6278dbdd8b84a5e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.183663   18088 certs.go:382] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt.3b67d508 -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt
	I1205 06:05:27.183730   18088 certs.go:386] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key.3b67d508 -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key
	I1205 06:05:27.183779   18088 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.key
	I1205 06:05:27.183797   18088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.crt with IP's: []
	I1205 06:05:27.227119   18088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.crt ...
	I1205 06:05:27.227139   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.crt: {Name:mk215009d063d438246d7d87ead78d88c93adaf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.227249   18088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.key ...
	I1205 06:05:27.227259   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.key: {Name:mk2d1e44ecb1c261d3bf6a4d1a94d448b238247b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.227425   18088 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:05:27.227458   18088 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:05:27.227489   18088 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:05:27.227512   18088 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 06:05:27.228081   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:05:27.245136   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:05:27.260924   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:05:27.276455   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:05:27.291743   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 06:05:27.306969   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:05:27.322268   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:05:27.337589   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:05:27.352787   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:05:27.369858   18088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:05:27.380919   18088 ssh_runner.go:195] Run: openssl version
	I1205 06:05:27.386432   18088 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:27.392734   18088 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:05:27.401220   18088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:27.404504   18088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:27.404538   18088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:27.436912   18088 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:05:27.443512   18088 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
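The openssl x509 -hash call and the ln -fs to /etc/ssl/certs/b5213941.0 above implement OpenSSL's hashed-directory CA lookup: the subject-name hash of the certificate becomes the symlink name so TLS clients can find the CA. The same two steps, with the hash captured into a variable (the HASH variable is illustrative; both commands appear verbatim in this log):

    # Sketch: hash the CA and create the c_rehash-style symlink.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"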
	I1205 06:05:27.450121   18088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:05:27.453122   18088 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 06:05:27.453162   18088 kubeadm.go:401] StartCluster: {Name:addons-177895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-177895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:05:27.453241   18088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:05:27.453283   18088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:05:27.477967   18088 cri.go:89] found id: ""
	I1205 06:05:27.478014   18088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:05:27.485110   18088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:05:27.492049   18088 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:05:27.492087   18088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:05:27.498795   18088 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:05:27.498811   18088 kubeadm.go:158] found existing configuration files:
	
	I1205 06:05:27.498840   18088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 06:05:27.505489   18088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:05:27.505523   18088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:05:27.512026   18088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 06:05:27.518616   18088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:05:27.518651   18088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:05:27.525128   18088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 06:05:27.531864   18088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:05:27.531905   18088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:05:27.538182   18088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 06:05:27.544796   18088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:05:27.544845   18088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:05:27.551225   18088 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:05:27.616853   18088 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1205 06:05:27.670668   18088 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:05:36.352110   18088 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 06:05:36.352184   18088 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:05:36.352278   18088 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:05:36.352398   18088 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1205 06:05:36.352470   18088 kubeadm.go:319] OS: Linux
	I1205 06:05:36.352533   18088 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:05:36.352614   18088 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:05:36.352683   18088 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:05:36.352759   18088 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:05:36.352835   18088 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:05:36.352886   18088 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:05:36.352955   18088 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:05:36.353033   18088 kubeadm.go:319] CGROUPS_IO: enabled
	I1205 06:05:36.353126   18088 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:05:36.353215   18088 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:05:36.353307   18088 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:05:36.353388   18088 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:05:36.354936   18088 out.go:252]   - Generating certificates and keys ...
	I1205 06:05:36.355003   18088 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:05:36.355055   18088 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:05:36.355120   18088 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 06:05:36.355170   18088 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 06:05:36.355218   18088 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 06:05:36.355278   18088 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 06:05:36.355367   18088 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 06:05:36.355551   18088 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-177895 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:05:36.355600   18088 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 06:05:36.355794   18088 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-177895 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:05:36.355905   18088 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 06:05:36.355984   18088 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 06:05:36.356057   18088 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 06:05:36.356134   18088 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:05:36.356180   18088 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:05:36.356237   18088 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:05:36.356295   18088 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:05:36.356394   18088 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:05:36.356445   18088 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:05:36.356544   18088 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:05:36.356647   18088 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:05:36.357876   18088 out.go:252]   - Booting up control plane ...
	I1205 06:05:36.357981   18088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:05:36.358070   18088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:05:36.358159   18088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:05:36.358285   18088 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:05:36.358419   18088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:05:36.358596   18088 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:05:36.358718   18088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:05:36.358773   18088 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:05:36.358939   18088 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:05:36.359101   18088 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:05:36.359198   18088 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.783887ms
	I1205 06:05:36.359316   18088 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 06:05:36.359431   18088 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1205 06:05:36.359553   18088 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 06:05:36.359674   18088 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 06:05:36.359797   18088 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.408286193s
	I1205 06:05:36.359888   18088 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.975191548s
	I1205 06:05:36.359970   18088 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.50221364s
	I1205 06:05:36.360108   18088 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 06:05:36.360276   18088 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 06:05:36.360372   18088 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 06:05:36.360645   18088 kubeadm.go:319] [mark-control-plane] Marking the node addons-177895 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 06:05:36.360722   18088 kubeadm.go:319] [bootstrap-token] Using token: 77ksux.rxi4lc4qkr43phxu
	I1205 06:05:36.362707   18088 out.go:252]   - Configuring RBAC rules ...
	I1205 06:05:36.362837   18088 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 06:05:36.362947   18088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 06:05:36.363140   18088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 06:05:36.363293   18088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 06:05:36.363449   18088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 06:05:36.363528   18088 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 06:05:36.363681   18088 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 06:05:36.363747   18088 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 06:05:36.363798   18088 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 06:05:36.363805   18088 kubeadm.go:319] 
	I1205 06:05:36.363865   18088 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 06:05:36.363877   18088 kubeadm.go:319] 
	I1205 06:05:36.363935   18088 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 06:05:36.363941   18088 kubeadm.go:319] 
	I1205 06:05:36.363961   18088 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 06:05:36.364010   18088 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 06:05:36.364058   18088 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 06:05:36.364063   18088 kubeadm.go:319] 
	I1205 06:05:36.364107   18088 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 06:05:36.364114   18088 kubeadm.go:319] 
	I1205 06:05:36.364153   18088 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 06:05:36.364159   18088 kubeadm.go:319] 
	I1205 06:05:36.364202   18088 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 06:05:36.364266   18088 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 06:05:36.364349   18088 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 06:05:36.364355   18088 kubeadm.go:319] 
	I1205 06:05:36.364428   18088 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 06:05:36.364494   18088 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 06:05:36.364501   18088 kubeadm.go:319] 
	I1205 06:05:36.364570   18088 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 77ksux.rxi4lc4qkr43phxu \
	I1205 06:05:36.364654   18088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f21ef1fe4655ade9215ff0d25196a0f1ad174afc7024ad048086e40bbc0de65d \
	I1205 06:05:36.364678   18088 kubeadm.go:319] 	--control-plane 
	I1205 06:05:36.364687   18088 kubeadm.go:319] 
	I1205 06:05:36.364757   18088 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 06:05:36.364764   18088 kubeadm.go:319] 
	I1205 06:05:36.364844   18088 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 77ksux.rxi4lc4qkr43phxu \
	I1205 06:05:36.364952   18088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f21ef1fe4655ade9215ff0d25196a0f1ad174afc7024ad048086e40bbc0de65d 
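The join commands printed above carry a --discovery-token-ca-cert-hash that joining nodes use to pin the cluster CA. A hedged way to recompute that hash on the control plane, following the standard kubeadm recipe and the certificatesDir shown in the configuration earlier (the exact pipeline is an assumption; this log never runs it):

    # Sketch: recompute the discovery-token CA cert hash from the cluster CA.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'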
	I1205 06:05:36.364964   18088 cni.go:84] Creating CNI manager for ""
	I1205 06:05:36.364969   18088 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:05:36.366336   18088 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1205 06:05:36.367409   18088 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 06:05:36.371405   18088 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1205 06:05:36.371420   18088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 06:05:36.383261   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
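Because the docker driver is paired with the crio runtime, minikube picks kindnet as the CNI and applies its manifest from /var/tmp/minikube/cni.yaml with the bundled kubectl, as shown above. A rough way to confirm the CNI landed, assuming the standard CNI config directory and that the resulting pods carry "kindnet" in their names (neither is shown in this log):

    # Sketch: check that a CNI config was written and the kindnet pods exist.
    sudo ls /etc/cni/net.d/
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods | grep -i kindnet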
	I1205 06:05:36.571084   18088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 06:05:36.571158   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:36.571154   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-177895 minikube.k8s.io/updated_at=2025_12_05T06_05_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=addons-177895 minikube.k8s.io/primary=true
	I1205 06:05:36.636652   18088 ops.go:34] apiserver oom_adj: -16
	I1205 06:05:36.636757   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:37.137503   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:37.637870   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:38.136964   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:38.637611   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:39.136895   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:39.636940   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:40.137490   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:40.637565   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:41.137823   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:41.198752   18088 kubeadm.go:1114] duration metric: took 4.627657727s to wait for elevateKubeSystemPrivileges
	I1205 06:05:41.198786   18088 kubeadm.go:403] duration metric: took 13.745626755s to StartCluster
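The burst of "kubectl get sa default" calls above is minikube polling roughly every 500ms until the default ServiceAccount exists, which is what the 4.6s elevateKubeSystemPrivileges duration measures. A hedged shell equivalent of that wait loop (the retry count is an assumption; the kubectl invocation is copied from the log):

    # Sketch: poll for the default ServiceAccount before applying RBAC.
    KUBECTL=/var/lib/minikube/binaries/v1.34.2/kubectl
    for _ in $(seq 1 120); do
      sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig && break
      sleep 0.5
    done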
	I1205 06:05:41.198809   18088 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:41.198922   18088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:05:41.199284   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:41.199507   18088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 06:05:41.199530   18088 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:05:41.199596   18088 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 06:05:41.199701   18088 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:05:41.199727   18088 addons.go:70] Setting yakd=true in profile "addons-177895"
	I1205 06:05:41.199737   18088 addons.go:70] Setting inspektor-gadget=true in profile "addons-177895"
	I1205 06:05:41.199753   18088 addons.go:239] Setting addon inspektor-gadget=true in "addons-177895"
	I1205 06:05:41.199761   18088 addons.go:70] Setting volcano=true in profile "addons-177895"
	I1205 06:05:41.199766   18088 addons.go:70] Setting registry-creds=true in profile "addons-177895"
	I1205 06:05:41.199776   18088 addons.go:70] Setting volumesnapshots=true in profile "addons-177895"
	I1205 06:05:41.199782   18088 addons.go:239] Setting addon registry-creds=true in "addons-177895"
	I1205 06:05:41.199784   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199787   18088 addons.go:239] Setting addon volumesnapshots=true in "addons-177895"
	I1205 06:05:41.199776   18088 addons.go:70] Setting default-storageclass=true in profile "addons-177895"
	I1205 06:05:41.199806   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199819   18088 addons.go:70] Setting metrics-server=true in profile "addons-177895"
	I1205 06:05:41.199814   18088 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-177895"
	I1205 06:05:41.199839   18088 addons.go:70] Setting cloud-spanner=true in profile "addons-177895"
	I1205 06:05:41.199826   18088 addons.go:70] Setting ingress=true in profile "addons-177895"
	I1205 06:05:41.199856   18088 addons.go:70] Setting registry=true in profile "addons-177895"
	I1205 06:05:41.199856   18088 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-177895"
	I1205 06:05:41.199862   18088 addons.go:70] Setting gcp-auth=true in profile "addons-177895"
	I1205 06:05:41.199868   18088 addons.go:239] Setting addon registry=true in "addons-177895"
	I1205 06:05:41.199879   18088 addons.go:239] Setting addon ingress=true in "addons-177895"
	I1205 06:05:41.199883   18088 mustload.go:66] Loading cluster: addons-177895
	I1205 06:05:41.199887   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199938   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199956   18088 addons.go:70] Setting storage-provisioner=true in profile "addons-177895"
	I1205 06:05:41.199975   18088 addons.go:239] Setting addon storage-provisioner=true in "addons-177895"
	I1205 06:05:41.200001   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.200126   18088 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:05:41.200276   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.200303   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.200316   18088 addons.go:70] Setting ingress-dns=true in profile "addons-177895"
	I1205 06:05:41.200345   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.200346   18088 addons.go:239] Setting addon ingress-dns=true in "addons-177895"
	I1205 06:05:41.200382   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.200418   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.200463   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.200474   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.199753   18088 addons.go:239] Setting addon yakd=true in "addons-177895"
	I1205 06:05:41.200742   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.201189   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.199770   18088 addons.go:239] Setting addon volcano=true in "addons-177895"
	I1205 06:05:41.201308   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199848   18088 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-177895"
	I1205 06:05:41.201384   18088 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-177895"
	I1205 06:05:41.201437   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199827   18088 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-177895"
	I1205 06:05:41.199855   18088 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-177895"
	I1205 06:05:41.201505   18088 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-177895"
	I1205 06:05:41.201555   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199837   18088 addons.go:239] Setting addon metrics-server=true in "addons-177895"
	I1205 06:05:41.201601   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.200303   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.201850   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.202054   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.202083   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.202127   18088 out.go:179] * Verifying Kubernetes components...
	I1205 06:05:41.199816   18088 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-177895"
	I1205 06:05:41.199851   18088 addons.go:239] Setting addon cloud-spanner=true in "addons-177895"
	I1205 06:05:41.202918   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.204054   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.199808   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.206940   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.207203   18088 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-177895"
	I1205 06:05:41.207238   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.207417   18088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:05:41.207801   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.209807   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.210709   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.211500   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.265310   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.268534   18088 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1205 06:05:41.268727   18088 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1205 06:05:41.268752   18088 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1205 06:05:41.270065   18088 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1205 06:05:41.270412   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1205 06:05:41.270465   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.272446   18088 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-177895"
	I1205 06:05:41.272546   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.273023   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.273098   18088 out.go:179]   - Using image docker.io/registry:3.0.0
	I1205 06:05:41.273285   18088 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 06:05:41.274911   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 06:05:41.274984   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.275803   18088 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 06:05:41.275831   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 06:05:41.275875   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.281545   18088 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:05:41.281608   18088 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 06:05:41.283135   18088 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 06:05:41.283154   18088 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 06:05:41.283206   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.283451   18088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:05:41.283488   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:05:41.283542   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.295803   18088 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1205 06:05:41.295803   18088 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1205 06:05:41.295905   18088 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1205 06:05:41.297060   18088 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 06:05:41.297080   18088 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 06:05:41.297155   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.297787   18088 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 06:05:41.297800   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1205 06:05:41.297844   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.298087   18088 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 06:05:41.298100   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1205 06:05:41.298140   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.296278   18088 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 06:05:41.304109   18088 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 06:05:41.304126   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 06:05:41.304137   18088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1205 06:05:41.304173   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.306482   18088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:05:41.307636   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 06:05:41.307731   18088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	W1205 06:05:41.311793   18088 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 06:05:41.312474   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 06:05:41.312491   18088 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 06:05:41.312552   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.312888   18088 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 06:05:41.312903   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 06:05:41.312946   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.321225   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 06:05:41.324790   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 06:05:41.326862   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 06:05:41.328055   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 06:05:41.329315   18088 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1205 06:05:41.330423   18088 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1205 06:05:41.330437   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 06:05:41.330498   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.331264   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 06:05:41.332549   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 06:05:41.333835   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 06:05:41.335203   18088 addons.go:239] Setting addon default-storageclass=true in "addons-177895"
	I1205 06:05:41.338899   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.339395   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.340281   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 06:05:41.341345   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 06:05:41.341405   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 06:05:41.342120   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.345308   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.348804   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.348930   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.354562   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.355759   18088 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 06:05:41.356878   18088 out.go:179]   - Using image docker.io/busybox:stable
	I1205 06:05:41.357801   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.358020   18088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
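The pipeline above rewrites the CoreDNS ConfigMap in place: a hosts block mapping host.minikube.internal to the gateway IP is inserted before the forward directive, a log directive is inserted before errors, and the result is pushed back with kubectl replace. Reconstructed from those sed expressions (not read back from the cluster), the affected part of the Corefile would look roughly like:

    log
    errors
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf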
	I1205 06:05:41.359340   18088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 06:05:41.361186   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 06:05:41.361273   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.361343   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.370433   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.383595   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.386697   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.393374   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.395304   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.402104   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.404792   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	W1205 06:05:41.406534   18088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:05:41.406563   18088 retry.go:31] will retry after 345.435491ms: ssh: handshake failed: EOF
	I1205 06:05:41.415651   18088 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:05:41.415675   18088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:05:41.415735   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.426078   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.426605   18088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1205 06:05:41.426875   18088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:05:41.426899   18088 retry.go:31] will retry after 208.716502ms: ssh: handshake failed: EOF
	I1205 06:05:41.448505   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	W1205 06:05:41.449539   18088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:05:41.449561   18088 retry.go:31] will retry after 356.939619ms: ssh: handshake failed: EOF
	I1205 06:05:41.532949   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 06:05:41.535565   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1205 06:05:41.537623   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 06:05:41.554153   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 06:05:41.570535   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:05:41.576427   18088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 06:05:41.576453   18088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 06:05:41.578156   18088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 06:05:41.578193   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 06:05:41.578258   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 06:05:41.578278   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 06:05:41.582874   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 06:05:41.586579   18088 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 06:05:41.586598   18088 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 06:05:41.587519   18088 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 06:05:41.587550   18088 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 06:05:41.607733   18088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 06:05:41.607762   18088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 06:05:41.614774   18088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 06:05:41.614800   18088 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 06:05:41.633789   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 06:05:41.633817   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 06:05:41.634155   18088 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 06:05:41.634177   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 06:05:41.643620   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 06:05:41.654815   18088 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 06:05:41.654847   18088 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 06:05:41.659442   18088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 06:05:41.659479   18088 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 06:05:41.660489   18088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 06:05:41.660510   18088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 06:05:41.687458   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 06:05:41.687485   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 06:05:41.689180   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 06:05:41.702281   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 06:05:41.702329   18088 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 06:05:41.702995   18088 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 06:05:41.703012   18088 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 06:05:41.728808   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 06:05:41.728862   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 06:05:41.743191   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 06:05:41.750830   18088 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 06:05:41.750857   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 06:05:41.761553   18088 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:05:41.761588   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 06:05:41.767450   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 06:05:41.767473   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 06:05:41.775765   18088 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1205 06:05:41.778655   18088 node_ready.go:35] waiting up to 6m0s for node "addons-177895" to be "Ready" ...
	I1205 06:05:41.806109   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 06:05:41.811921   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:05:41.829341   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 06:05:41.829365   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 06:05:41.860872   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 06:05:41.886479   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 06:05:41.886503   18088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 06:05:41.929309   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 06:05:41.929345   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 06:05:41.964232   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 06:05:41.964256   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 06:05:42.020916   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 06:05:42.021008   18088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 06:05:42.026954   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:05:42.029563   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 06:05:42.060630   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 06:05:42.285880   18088 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-177895" context rescaled to 1 replicas
	I1205 06:05:42.716541   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.183551814s)
	I1205 06:05:42.716587   18088 addons.go:495] Verifying addon ingress=true in "addons-177895"
	I1205 06:05:42.716618   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.181004601s)
	I1205 06:05:42.716725   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.162548412s)
	I1205 06:05:42.716800   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.146211823s)
	I1205 06:05:42.716830   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.133936s)
	I1205 06:05:42.716667   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.179026456s)
	I1205 06:05:42.716920   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.073272676s)
	I1205 06:05:42.716973   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.027763843s)
	I1205 06:05:42.716988   18088 addons.go:495] Verifying addon registry=true in "addons-177895"
	I1205 06:05:42.717097   18088 addons.go:495] Verifying addon metrics-server=true in "addons-177895"
	I1205 06:05:42.718077   18088 out.go:179] * Verifying registry addon...
	I1205 06:05:42.718089   18088 out.go:179] * Verifying ingress addon...
	I1205 06:05:42.720564   18088 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-177895 service yakd-dashboard -n yakd-dashboard
	
	I1205 06:05:42.722058   18088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 06:05:42.722059   18088 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 06:05:42.724875   18088 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 06:05:42.724942   18088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 06:05:42.724957   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:43.181137   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.369170367s)
	W1205 06:05:43.181195   18088 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 06:05:43.181221   18088 retry.go:31] will retry after 181.285683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 06:05:43.181288   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.320341455s)
	I1205 06:05:43.181515   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.154543642s)
	I1205 06:05:43.181629   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.152041061s)
	I1205 06:05:43.181828   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.121160666s)
	I1205 06:05:43.181846   18088 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-177895"
	I1205 06:05:43.183267   18088 out.go:179] * Verifying csi-hostpath-driver addon...
	I1205 06:05:43.185794   18088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 06:05:43.188377   18088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 06:05:43.188395   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1205 06:05:43.188398   18088 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
	I1205 06:05:43.288680   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:43.288799   18088 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 06:05:43.288817   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:43.362924   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:05:43.688581   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:43.724448   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:43.724756   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:43.781372   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:44.189444   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:44.224243   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:44.224421   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:44.688228   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:44.724134   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:44.724370   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:45.189134   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:45.225406   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:45.225459   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:45.688357   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:45.724746   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:45.724810   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:45.785970   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.423011549s)
	I1205 06:05:46.188837   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:46.224836   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:46.225056   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:46.280404   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:46.687984   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:46.724781   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:46.724852   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:47.188631   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:47.224629   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:47.224720   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:47.689075   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:47.726198   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:47.726426   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:48.188589   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:48.224548   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:48.224550   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:48.280776   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:48.688180   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:48.723766   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:48.724032   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:48.878250   18088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 06:05:48.878304   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:48.896081   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:49.007062   18088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 06:05:49.018567   18088 addons.go:239] Setting addon gcp-auth=true in "addons-177895"
	I1205 06:05:49.018638   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:49.018965   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:49.035660   18088 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 06:05:49.035708   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:49.052289   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:49.146036   18088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:05:49.147039   18088 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 06:05:49.148019   18088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 06:05:49.148033   18088 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 06:05:49.159647   18088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 06:05:49.159665   18088 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 06:05:49.171016   18088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 06:05:49.171036   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 06:05:49.182341   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 06:05:49.188757   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:49.224963   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:49.225133   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:49.458781   18088 addons.go:495] Verifying addon gcp-auth=true in "addons-177895"
	I1205 06:05:49.460141   18088 out.go:179] * Verifying gcp-auth addon...
	I1205 06:05:49.462277   18088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 06:05:49.464121   18088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 06:05:49.464138   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:49.688926   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:49.724673   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:49.724929   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:49.965123   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:50.188585   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:50.224599   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:50.224745   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:50.281258   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:50.464700   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:50.689373   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:50.724164   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:50.724374   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:50.965002   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:51.188622   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:51.224542   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:51.224707   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:51.465141   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:51.688496   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:51.724290   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:51.724451   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:51.965051   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:52.188550   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:52.224598   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:52.224737   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:52.281549   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:52.464718   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:52.689241   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:52.723881   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:52.724138   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:52.965023   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:53.188269   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:53.224130   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:53.224272   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:53.465225   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:53.688505   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:53.724765   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:53.724882   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:53.964423   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:54.188922   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:54.224714   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:54.224952   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:54.464533   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:54.688983   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:54.725060   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:54.725096   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:54.780749   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:54.965605   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:55.189150   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:55.225364   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:55.225446   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:55.464448   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:55.688882   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:55.725599   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:55.725804   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:55.965366   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:56.188855   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:56.224877   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:56.224967   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:56.464847   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:56.688421   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:56.724349   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:56.724404   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:56.781118   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:56.965154   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:57.188672   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:57.224619   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:57.224772   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:57.464663   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:57.689073   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:57.725142   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:57.725184   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:57.965066   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:58.188463   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:58.224349   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:58.224589   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:58.465455   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:58.689236   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:58.723999   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:58.724102   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:58.964934   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:59.188163   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:59.225154   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:59.225313   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:59.281221   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:59.464466   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:59.688713   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:59.724666   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:59.724683   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:59.964307   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:00.188991   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:00.224882   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:00.225159   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:00.464933   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:00.688434   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:00.724233   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:00.724416   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:00.965240   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:01.189057   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:01.225191   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:01.225264   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:01.281544   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:01.465044   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:01.688564   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:01.724649   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:01.724730   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:01.965206   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:02.188618   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:02.224733   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:02.224923   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:02.465004   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:02.688887   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:02.724808   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:02.724955   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:02.964612   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:03.189021   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:03.225128   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:03.225214   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:03.465295   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:03.688912   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:03.724965   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:03.725024   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:03.781024   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:03.965065   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:04.188705   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:04.224474   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:04.224632   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:04.464376   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:04.688911   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:04.724818   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:04.724930   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:04.964975   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:05.188165   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:05.223959   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:05.224220   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:05.465285   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:05.688680   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:05.724872   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:05.724876   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1205 06:06:05.781466   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:05.964705   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:06.188022   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:06.224859   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:06.224998   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:06.465515   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:06.689059   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:06.725032   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:06.725155   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:06.965046   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:07.188520   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:07.224518   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:07.224665   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:07.464956   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:07.688248   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:07.723982   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:07.724133   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:07.965169   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:08.188262   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:08.224177   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:08.224229   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:08.280877   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:08.465437   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:08.688688   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:08.724743   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:08.724847   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:08.964647   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:09.189023   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:09.225071   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:09.225195   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:09.465185   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:09.688593   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:09.724588   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:09.724613   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:09.964589   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:10.188776   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:10.224707   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:10.224832   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:10.464410   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:10.688730   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:10.724641   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:10.724848   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:10.780501   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:10.964704   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:11.188830   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:11.224955   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:11.225139   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:11.465316   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:11.688673   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:11.724799   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:11.724954   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:11.964852   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:12.188264   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:12.224008   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:12.224244   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:12.465225   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:12.688724   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:12.724741   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:12.724951   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:12.780672   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:12.964953   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:13.188227   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:13.224055   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:13.224156   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:13.464959   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:13.688209   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:13.724259   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:13.724263   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:13.965204   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:14.188530   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:14.224533   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:14.224663   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:14.464737   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:14.688007   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:14.724916   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:14.725077   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:14.965018   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:15.188449   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:15.224432   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:15.224610   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:15.281355   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:15.464589   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:15.688956   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:15.724781   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:15.724982   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:15.964777   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:16.188023   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:16.225136   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:16.225167   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:16.465693   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:16.688991   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:16.724896   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:16.724927   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:16.964843   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:17.188132   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:17.224071   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:17.224235   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:17.464661   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:17.689056   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:17.725134   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:17.725160   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:17.781130   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:17.964490   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:18.188680   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:18.224437   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:18.224606   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:18.464501   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:18.689003   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:18.728724   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:18.728787   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:18.965102   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:19.188298   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:19.224196   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:19.224395   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:19.465440   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:19.688928   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:19.724796   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:19.724900   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:19.964900   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:20.188144   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:20.225062   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:20.225100   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:20.280919   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:20.465399   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:20.688834   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:20.724874   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:20.725061   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:20.965076   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:21.188546   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:21.224691   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:21.224738   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:21.465313   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:21.688644   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:21.724788   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:21.724998   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:21.964934   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:22.188220   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:22.224071   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:22.224267   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:22.281035   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:22.465543   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:22.690483   18088 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 06:06:22.690512   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:22.725450   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:22.726650   18088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 06:06:22.726673   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:22.780601   18088 node_ready.go:49] node "addons-177895" is "Ready"
	I1205 06:06:22.780624   18088 node_ready.go:38] duration metric: took 41.00193939s for node "addons-177895" to be "Ready" ...
	I1205 06:06:22.780636   18088 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:06:22.780675   18088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:06:22.798069   18088 api_server.go:72] duration metric: took 41.598504915s to wait for apiserver process to appear ...
	I1205 06:06:22.798096   18088 api_server.go:88] waiting for apiserver healthz status ...
	I1205 06:06:22.798123   18088 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 06:06:22.804098   18088 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 06:06:22.804848   18088 api_server.go:141] control plane version: v1.34.2
	I1205 06:06:22.804869   18088 api_server.go:131] duration metric: took 6.764721ms to wait for apiserver health ...
	I1205 06:06:22.804876   18088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 06:06:22.808144   18088 system_pods.go:59] 20 kube-system pods found
	I1205 06:06:22.808173   18088 system_pods.go:61] "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:06:22.808180   18088 system_pods.go:61] "coredns-66bc5c9577-xlfl4" [fca7fb2d-3a9c-4281-8f88-7427ed346cbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:06:22.808188   18088 system_pods.go:61] "csi-hostpath-attacher-0" [a83298ac-7851-4a3b-927e-367a9d031cdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:06:22.808194   18088 system_pods.go:61] "csi-hostpath-resizer-0" [18e71be9-8902-4c50-94e3-01ad80da8abc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:06:22.808203   18088 system_pods.go:61] "csi-hostpathplugin-gm8fx" [e588dfd7-6485-4158-b44f-7e5e5b742036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 06:06:22.808208   18088 system_pods.go:61] "etcd-addons-177895" [252d20a0-beef-497d-98ca-a861b06516c6] Running
	I1205 06:06:22.808215   18088 system_pods.go:61] "kindnet-n79ts" [b626c676-0b57-479a-8b6d-784cf0ffaa23] Running
	I1205 06:06:22.808218   18088 system_pods.go:61] "kube-apiserver-addons-177895" [fe9497b8-5686-412c-ada1-5922bed2e5e8] Running
	I1205 06:06:22.808224   18088 system_pods.go:61] "kube-controller-manager-addons-177895" [72fc0c5c-3be3-4fad-bdf3-4fca1da839dc] Running
	I1205 06:06:22.808229   18088 system_pods.go:61] "kube-ingress-dns-minikube" [fede7f44-4af6-4d0a-a25b-764dd3bae9b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:06:22.808236   18088 system_pods.go:61] "kube-proxy-gk8dq" [403c7d4a-8858-408b-88a3-3b59056a6db8] Running
	I1205 06:06:22.808239   18088 system_pods.go:61] "kube-scheduler-addons-177895" [827c6197-9bb4-488e-99c6-0ffd004a8d3e] Running
	I1205 06:06:22.808244   18088 system_pods.go:61] "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:06:22.808249   18088 system_pods.go:61] "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:06:22.808257   18088 system_pods.go:61] "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:06:22.808262   18088 system_pods.go:61] "registry-creds-764b6fb674-8p8pq" [8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:06:22.808269   18088 system_pods.go:61] "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:06:22.808274   18088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d5g82" [8b9afade-56f5-4719-af5a-be801e40a504] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:22.808282   18088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-h9khj" [5e8b27bf-14d3-4269-ab6f-7e236482cb3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:22.808287   18088 system_pods.go:61] "storage-provisioner" [866f597a-b240-4a0b-8f9c-d1604ca66331] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:06:22.808294   18088 system_pods.go:74] duration metric: took 3.41374ms to wait for pod list to return data ...
	I1205 06:06:22.808301   18088 default_sa.go:34] waiting for default service account to be created ...
	I1205 06:06:22.809932   18088 default_sa.go:45] found service account: "default"
	I1205 06:06:22.809947   18088 default_sa.go:55] duration metric: took 1.639388ms for default service account to be created ...
	I1205 06:06:22.809954   18088 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 06:06:22.812757   18088 system_pods.go:86] 20 kube-system pods found
	I1205 06:06:22.812786   18088 system_pods.go:89] "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:06:22.812796   18088 system_pods.go:89] "coredns-66bc5c9577-xlfl4" [fca7fb2d-3a9c-4281-8f88-7427ed346cbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:06:22.812809   18088 system_pods.go:89] "csi-hostpath-attacher-0" [a83298ac-7851-4a3b-927e-367a9d031cdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:06:22.812820   18088 system_pods.go:89] "csi-hostpath-resizer-0" [18e71be9-8902-4c50-94e3-01ad80da8abc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:06:22.812829   18088 system_pods.go:89] "csi-hostpathplugin-gm8fx" [e588dfd7-6485-4158-b44f-7e5e5b742036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 06:06:22.812838   18088 system_pods.go:89] "etcd-addons-177895" [252d20a0-beef-497d-98ca-a861b06516c6] Running
	I1205 06:06:22.812848   18088 system_pods.go:89] "kindnet-n79ts" [b626c676-0b57-479a-8b6d-784cf0ffaa23] Running
	I1205 06:06:22.812857   18088 system_pods.go:89] "kube-apiserver-addons-177895" [fe9497b8-5686-412c-ada1-5922bed2e5e8] Running
	I1205 06:06:22.812866   18088 system_pods.go:89] "kube-controller-manager-addons-177895" [72fc0c5c-3be3-4fad-bdf3-4fca1da839dc] Running
	I1205 06:06:22.812878   18088 system_pods.go:89] "kube-ingress-dns-minikube" [fede7f44-4af6-4d0a-a25b-764dd3bae9b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:06:22.812886   18088 system_pods.go:89] "kube-proxy-gk8dq" [403c7d4a-8858-408b-88a3-3b59056a6db8] Running
	I1205 06:06:22.812891   18088 system_pods.go:89] "kube-scheduler-addons-177895" [827c6197-9bb4-488e-99c6-0ffd004a8d3e] Running
	I1205 06:06:22.812903   18088 system_pods.go:89] "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:06:22.812914   18088 system_pods.go:89] "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:06:22.812931   18088 system_pods.go:89] "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:06:22.812943   18088 system_pods.go:89] "registry-creds-764b6fb674-8p8pq" [8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:06:22.812951   18088 system_pods.go:89] "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:06:22.812962   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d5g82" [8b9afade-56f5-4719-af5a-be801e40a504] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:22.812974   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h9khj" [5e8b27bf-14d3-4269-ab6f-7e236482cb3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:22.812982   18088 system_pods.go:89] "storage-provisioner" [866f597a-b240-4a0b-8f9c-d1604ca66331] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:06:22.813000   18088 retry.go:31] will retry after 244.609337ms: missing components: kube-dns
	I1205 06:06:22.967015   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:23.068213   18088 system_pods.go:86] 20 kube-system pods found
	I1205 06:06:23.068253   18088 system_pods.go:89] "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:06:23.068266   18088 system_pods.go:89] "coredns-66bc5c9577-xlfl4" [fca7fb2d-3a9c-4281-8f88-7427ed346cbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:06:23.068277   18088 system_pods.go:89] "csi-hostpath-attacher-0" [a83298ac-7851-4a3b-927e-367a9d031cdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:06:23.068285   18088 system_pods.go:89] "csi-hostpath-resizer-0" [18e71be9-8902-4c50-94e3-01ad80da8abc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:06:23.068293   18088 system_pods.go:89] "csi-hostpathplugin-gm8fx" [e588dfd7-6485-4158-b44f-7e5e5b742036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 06:06:23.068299   18088 system_pods.go:89] "etcd-addons-177895" [252d20a0-beef-497d-98ca-a861b06516c6] Running
	I1205 06:06:23.068312   18088 system_pods.go:89] "kindnet-n79ts" [b626c676-0b57-479a-8b6d-784cf0ffaa23] Running
	I1205 06:06:23.068336   18088 system_pods.go:89] "kube-apiserver-addons-177895" [fe9497b8-5686-412c-ada1-5922bed2e5e8] Running
	I1205 06:06:23.068347   18088 system_pods.go:89] "kube-controller-manager-addons-177895" [72fc0c5c-3be3-4fad-bdf3-4fca1da839dc] Running
	I1205 06:06:23.068356   18088 system_pods.go:89] "kube-ingress-dns-minikube" [fede7f44-4af6-4d0a-a25b-764dd3bae9b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:06:23.068361   18088 system_pods.go:89] "kube-proxy-gk8dq" [403c7d4a-8858-408b-88a3-3b59056a6db8] Running
	I1205 06:06:23.068368   18088 system_pods.go:89] "kube-scheduler-addons-177895" [827c6197-9bb4-488e-99c6-0ffd004a8d3e] Running
	I1205 06:06:23.068377   18088 system_pods.go:89] "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:06:23.068387   18088 system_pods.go:89] "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:06:23.068395   18088 system_pods.go:89] "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:06:23.068404   18088 system_pods.go:89] "registry-creds-764b6fb674-8p8pq" [8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:06:23.068412   18088 system_pods.go:89] "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:06:23.068425   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d5g82" [8b9afade-56f5-4719-af5a-be801e40a504] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.068434   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h9khj" [5e8b27bf-14d3-4269-ab6f-7e236482cb3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.068446   18088 system_pods.go:89] "storage-provisioner" [866f597a-b240-4a0b-8f9c-d1604ca66331] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:06:23.068467   18088 retry.go:31] will retry after 243.523752ms: missing components: kube-dns
	I1205 06:06:23.190510   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:23.225813   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:23.225854   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:23.327844   18088 system_pods.go:86] 20 kube-system pods found
	I1205 06:06:23.327882   18088 system_pods.go:89] "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:06:23.327893   18088 system_pods.go:89] "coredns-66bc5c9577-xlfl4" [fca7fb2d-3a9c-4281-8f88-7427ed346cbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:06:23.327904   18088 system_pods.go:89] "csi-hostpath-attacher-0" [a83298ac-7851-4a3b-927e-367a9d031cdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:06:23.327912   18088 system_pods.go:89] "csi-hostpath-resizer-0" [18e71be9-8902-4c50-94e3-01ad80da8abc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:06:23.327921   18088 system_pods.go:89] "csi-hostpathplugin-gm8fx" [e588dfd7-6485-4158-b44f-7e5e5b742036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 06:06:23.327930   18088 system_pods.go:89] "etcd-addons-177895" [252d20a0-beef-497d-98ca-a861b06516c6] Running
	I1205 06:06:23.327937   18088 system_pods.go:89] "kindnet-n79ts" [b626c676-0b57-479a-8b6d-784cf0ffaa23] Running
	I1205 06:06:23.327946   18088 system_pods.go:89] "kube-apiserver-addons-177895" [fe9497b8-5686-412c-ada1-5922bed2e5e8] Running
	I1205 06:06:23.327953   18088 system_pods.go:89] "kube-controller-manager-addons-177895" [72fc0c5c-3be3-4fad-bdf3-4fca1da839dc] Running
	I1205 06:06:23.327965   18088 system_pods.go:89] "kube-ingress-dns-minikube" [fede7f44-4af6-4d0a-a25b-764dd3bae9b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:06:23.327970   18088 system_pods.go:89] "kube-proxy-gk8dq" [403c7d4a-8858-408b-88a3-3b59056a6db8] Running
	I1205 06:06:23.327976   18088 system_pods.go:89] "kube-scheduler-addons-177895" [827c6197-9bb4-488e-99c6-0ffd004a8d3e] Running
	I1205 06:06:23.327987   18088 system_pods.go:89] "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:06:23.327999   18088 system_pods.go:89] "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:06:23.328010   18088 system_pods.go:89] "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:06:23.328018   18088 system_pods.go:89] "registry-creds-764b6fb674-8p8pq" [8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:06:23.328026   18088 system_pods.go:89] "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:06:23.328036   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d5g82" [8b9afade-56f5-4719-af5a-be801e40a504] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.328044   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h9khj" [5e8b27bf-14d3-4269-ab6f-7e236482cb3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.328064   18088 system_pods.go:89] "storage-provisioner" [866f597a-b240-4a0b-8f9c-d1604ca66331] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:06:23.328083   18088 retry.go:31] will retry after 405.070616ms: missing components: kube-dns
	I1205 06:06:23.465719   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:23.688989   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:23.726689   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:23.726891   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:23.737349   18088 system_pods.go:86] 20 kube-system pods found
	I1205 06:06:23.737373   18088 system_pods.go:89] "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:06:23.737379   18088 system_pods.go:89] "coredns-66bc5c9577-xlfl4" [fca7fb2d-3a9c-4281-8f88-7427ed346cbd] Running
	I1205 06:06:23.737386   18088 system_pods.go:89] "csi-hostpath-attacher-0" [a83298ac-7851-4a3b-927e-367a9d031cdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:06:23.737394   18088 system_pods.go:89] "csi-hostpath-resizer-0" [18e71be9-8902-4c50-94e3-01ad80da8abc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:06:23.737400   18088 system_pods.go:89] "csi-hostpathplugin-gm8fx" [e588dfd7-6485-4158-b44f-7e5e5b742036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 06:06:23.737407   18088 system_pods.go:89] "etcd-addons-177895" [252d20a0-beef-497d-98ca-a861b06516c6] Running
	I1205 06:06:23.737411   18088 system_pods.go:89] "kindnet-n79ts" [b626c676-0b57-479a-8b6d-784cf0ffaa23] Running
	I1205 06:06:23.737417   18088 system_pods.go:89] "kube-apiserver-addons-177895" [fe9497b8-5686-412c-ada1-5922bed2e5e8] Running
	I1205 06:06:23.737420   18088 system_pods.go:89] "kube-controller-manager-addons-177895" [72fc0c5c-3be3-4fad-bdf3-4fca1da839dc] Running
	I1205 06:06:23.737429   18088 system_pods.go:89] "kube-ingress-dns-minikube" [fede7f44-4af6-4d0a-a25b-764dd3bae9b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:06:23.737436   18088 system_pods.go:89] "kube-proxy-gk8dq" [403c7d4a-8858-408b-88a3-3b59056a6db8] Running
	I1205 06:06:23.737440   18088 system_pods.go:89] "kube-scheduler-addons-177895" [827c6197-9bb4-488e-99c6-0ffd004a8d3e] Running
	I1205 06:06:23.737448   18088 system_pods.go:89] "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:06:23.737454   18088 system_pods.go:89] "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:06:23.737461   18088 system_pods.go:89] "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:06:23.737468   18088 system_pods.go:89] "registry-creds-764b6fb674-8p8pq" [8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:06:23.737475   18088 system_pods.go:89] "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:06:23.737480   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d5g82" [8b9afade-56f5-4719-af5a-be801e40a504] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.737490   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h9khj" [5e8b27bf-14d3-4269-ab6f-7e236482cb3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.737497   18088 system_pods.go:89] "storage-provisioner" [866f597a-b240-4a0b-8f9c-d1604ca66331] Running
	I1205 06:06:23.737504   18088 system_pods.go:126] duration metric: took 927.545087ms to wait for k8s-apps to be running ...
	I1205 06:06:23.737513   18088 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 06:06:23.737550   18088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:06:23.774226   18088 system_svc.go:56] duration metric: took 36.702127ms WaitForService to wait for kubelet
	I1205 06:06:23.774254   18088 kubeadm.go:587] duration metric: took 42.574692838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:06:23.774276   18088 node_conditions.go:102] verifying NodePressure condition ...
	I1205 06:06:23.776970   18088 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 06:06:23.776996   18088 node_conditions.go:123] node cpu capacity is 8
	I1205 06:06:23.777016   18088 node_conditions.go:105] duration metric: took 2.734091ms to run NodePressure ...
	I1205 06:06:23.777031   18088 start.go:242] waiting for startup goroutines ...
	I1205 06:06:23.966175   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:24.189377   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:24.289995   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:24.290205   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:24.466000   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:24.689188   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:24.790560   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:24.790672   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:24.966060   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:25.189809   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:25.225277   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:25.225398   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:25.465035   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:25.689145   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:25.790071   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:25.790145   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:25.965492   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:26.189802   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:26.225258   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:26.225354   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:26.464713   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:26.688699   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:26.789631   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:26.789669   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:26.965090   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:27.190029   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:27.225754   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:27.225792   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:27.465577   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:27.689637   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:27.725212   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:27.725230   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:27.964980   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:28.189269   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:28.225932   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:28.226015   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:28.465671   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:28.689654   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:28.790832   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:28.791031   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:28.965506   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:29.190149   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:29.225579   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:29.225613   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:29.465231   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:29.688922   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:29.726777   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:29.727054   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:29.966086   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:30.189016   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:30.227163   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:30.227497   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:30.466894   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:30.690256   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:30.727477   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:30.728469   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:30.965461   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:31.190105   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:31.225624   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:31.225646   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:31.465480   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:31.689591   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:31.725194   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:31.725218   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:31.965991   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:32.188717   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:32.225483   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:32.225547   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:32.465466   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:32.689490   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:32.725013   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:32.725174   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:32.964913   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:33.189433   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:33.225131   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:33.225160   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:33.465877   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:33.689070   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:33.725962   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:33.726005   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:33.966201   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:34.189340   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:34.290170   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:34.290200   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:34.466178   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:34.689999   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:34.725075   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:34.725297   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:34.965059   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:35.189145   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:35.225881   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:35.225897   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:35.465609   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:35.689808   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:35.725316   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:35.725462   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:35.965179   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:36.189654   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:36.290344   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:36.290605   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:36.464817   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:36.688559   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:36.724644   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:36.724644   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:36.965593   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:37.189904   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:37.225113   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:37.225244   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:37.466887   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:37.688618   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:37.724942   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:37.725079   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:37.965826   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:38.188848   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:38.225256   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:38.225262   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:38.464883   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:38.688552   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:38.724547   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:38.724709   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:38.965969   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:39.189569   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:39.224433   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:39.224489   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:39.464847   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:39.689663   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:39.725096   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:39.725166   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:39.966137   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:40.190011   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:40.226002   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:40.226169   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:40.465859   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:40.688904   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:40.725425   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:40.725515   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:40.964882   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:41.189433   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:41.224816   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:41.224987   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:41.465903   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:41.688529   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:41.724193   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:41.724403   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:41.964976   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:42.188993   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:42.225510   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:42.225621   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:42.464909   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:42.688607   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:42.724786   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:42.724883   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:42.965635   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:43.189425   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:43.224537   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:43.224547   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:43.464922   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:43.688975   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:43.725563   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:43.725601   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:43.964914   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:44.191160   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:44.227365   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:44.227477   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:44.465556   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:44.689658   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:44.724957   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:44.725037   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:44.965725   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:45.190287   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:45.225468   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:45.225623   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:45.465294   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:45.689638   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:45.725038   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:45.725077   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:45.966127   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:46.189542   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:46.225158   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:46.225200   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:46.465480   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:46.689539   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:46.724475   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:46.724623   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:46.965896   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:47.189082   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:47.226127   18088 kapi.go:107] duration metric: took 1m4.504066087s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 06:06:47.226175   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:47.465752   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:47.689142   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:47.725933   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:48.074111   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:48.189349   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:48.289385   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:48.464755   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:48.690091   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:48.725412   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:48.964726   18088 kapi.go:107] duration metric: took 59.502442894s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 06:06:48.966373   18088 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-177895 cluster.
	I1205 06:06:48.967586   18088 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 06:06:48.968679   18088 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 06:06:49.189497   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:49.225657   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:49.689137   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:49.725805   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:50.189459   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:50.226209   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:50.689186   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:50.725301   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:51.189563   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:51.224865   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:51.688917   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:51.725446   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:52.190252   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:52.225853   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:52.688569   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:52.725278   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:53.189769   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:53.225102   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:53.689215   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:53.725684   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:54.188801   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:54.225021   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:54.689210   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:54.789424   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:55.190989   18088 kapi.go:107] duration metric: took 1m12.005191342s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 06:06:55.225845   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:55.784304   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:56.225776   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:56.725616   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:57.289450   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:57.725889   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:58.225180   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:58.725272   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:59.225220   18088 kapi.go:107] duration metric: took 1m16.503160217s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 06:06:59.226732   18088 out.go:179] * Enabled addons: registry-creds, inspektor-gadget, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, metrics-server, yakd, cloud-spanner, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1205 06:06:59.227757   18088 addons.go:530] duration metric: took 1m18.028159047s for enable addons: enabled=[registry-creds inspektor-gadget storage-provisioner amd-gpu-device-plugin nvidia-device-plugin ingress-dns metrics-server yakd cloud-spanner storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1205 06:06:59.227793   18088 start.go:247] waiting for cluster config update ...
	I1205 06:06:59.227812   18088 start.go:256] writing updated cluster config ...
	I1205 06:06:59.228043   18088 ssh_runner.go:195] Run: rm -f paused
	I1205 06:06:59.231862   18088 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:06:59.234553   18088 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xlfl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.237718   18088 pod_ready.go:94] pod "coredns-66bc5c9577-xlfl4" is "Ready"
	I1205 06:06:59.237737   18088 pod_ready.go:86] duration metric: took 3.165751ms for pod "coredns-66bc5c9577-xlfl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.239383   18088 pod_ready.go:83] waiting for pod "etcd-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.242489   18088 pod_ready.go:94] pod "etcd-addons-177895" is "Ready"
	I1205 06:06:59.242511   18088 pod_ready.go:86] duration metric: took 3.110544ms for pod "etcd-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.243973   18088 pod_ready.go:83] waiting for pod "kube-apiserver-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.246910   18088 pod_ready.go:94] pod "kube-apiserver-addons-177895" is "Ready"
	I1205 06:06:59.246931   18088 pod_ready.go:86] duration metric: took 2.94163ms for pod "kube-apiserver-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.248480   18088 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.635043   18088 pod_ready.go:94] pod "kube-controller-manager-addons-177895" is "Ready"
	I1205 06:06:59.635069   18088 pod_ready.go:86] duration metric: took 386.573508ms for pod "kube-controller-manager-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.856651   18088 pod_ready.go:83] waiting for pod "kube-proxy-gk8dq" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:07:00.235610   18088 pod_ready.go:94] pod "kube-proxy-gk8dq" is "Ready"
	I1205 06:07:00.235634   18088 pod_ready.go:86] duration metric: took 378.957923ms for pod "kube-proxy-gk8dq" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:07:00.435431   18088 pod_ready.go:83] waiting for pod "kube-scheduler-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:07:00.835115   18088 pod_ready.go:94] pod "kube-scheduler-addons-177895" is "Ready"
	I1205 06:07:00.835139   18088 pod_ready.go:86] duration metric: took 399.686441ms for pod "kube-scheduler-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:07:00.835150   18088 pod_ready.go:40] duration metric: took 1.603261281s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:07:00.877671   18088 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 06:07:00.879692   18088 out.go:179] * Done! kubectl is now configured to use "addons-177895" cluster and "default" namespace by default
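
The kapi.go "waiting for pod ... current state: Pending" lines and the pod_ready.go checks above amount to polling the API server for pods matching a label selector until they report Ready, then logging the elapsed time ("duration metric: took ..."). The following is a minimal client-go sketch of that pattern, not minikube's actual kapi.go/pod_ready.go code; the label selector and namespace are taken from the log above, the timeout and poll interval are assumptions.

// wait_ready_sketch.go — hedged illustration only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	selector := "kubernetes.io/minikube-addons=registry" // label from the log above
	start := time.Now()
	deadline := start.Add(6 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
			return
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	fmt.Println("timed out waiting for", selector)
}
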
	
	
	==> CRI-O <==
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.338402331Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-79jkg/POD" id=c5cc434c-d208-493f-9dfa-9363bb7065b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.338486309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.345472502Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-79jkg Namespace:default ID:7801fbd3a34bc5e149a598c0d20d8fe1464ab61163074c3e7c4d44973ce106cd UID:ff17225d-78dd-4232-a696-158bda68711d NetNS:/var/run/netns/e96662aa-56da-4d6b-b294-f0f74a638283 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000bd0d8}] Aliases:map[]}"
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.345506642Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-79jkg to CNI network \"kindnet\" (type=ptp)"
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.355420955Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-79jkg Namespace:default ID:7801fbd3a34bc5e149a598c0d20d8fe1464ab61163074c3e7c4d44973ce106cd UID:ff17225d-78dd-4232-a696-158bda68711d NetNS:/var/run/netns/e96662aa-56da-4d6b-b294-f0f74a638283 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000bd0d8}] Aliases:map[]}"
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.355546847Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-79jkg for CNI network kindnet (type=ptp)"
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.356408352Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.359993235Z" level=info msg="Ran pod sandbox 7801fbd3a34bc5e149a598c0d20d8fe1464ab61163074c3e7c4d44973ce106cd with infra container: default/hello-world-app-5d498dc89-79jkg/POD" id=c5cc434c-d208-493f-9dfa-9363bb7065b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.361245744Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d864595d-c1c8-4de2-bf70-1dad91424aea name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.361465947Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d864595d-c1c8-4de2-bf70-1dad91424aea name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.361574474Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=d864595d-c1c8-4de2-bf70-1dad91424aea name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.362239343Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=b6ea64a5-9dd8-4d7b-9317-5cee08a7a1e7 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:09:35 addons-177895 crio[774]: time="2025-12-05T06:09:35.37821178Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.133653298Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=b6ea64a5-9dd8-4d7b-9317-5cee08a7a1e7 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.134183591Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0db07ef1-1483-4b3a-9630-62d98dde40c3 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.135618894Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d9ffcb38-3940-4b8e-b6d9-d654bf4c5d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.13862074Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-79jkg/hello-world-app" id=038b8465-9bd7-4606-9b9d-0b0f7a8c77f3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.138746354Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.144399723Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.144620301Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e46caae97f53ff1027713a97728b339dbe5b7443ed6c1430499458efd51178b1/merged/etc/passwd: no such file or directory"
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.144655978Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e46caae97f53ff1027713a97728b339dbe5b7443ed6c1430499458efd51178b1/merged/etc/group: no such file or directory"
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.144937661Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.182861432Z" level=info msg="Created container 32088a1feffe27f010ec391b016d9698388cf19572adbb94ec8e07a4f574cf16: default/hello-world-app-5d498dc89-79jkg/hello-world-app" id=038b8465-9bd7-4606-9b9d-0b0f7a8c77f3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.183456681Z" level=info msg="Starting container: 32088a1feffe27f010ec391b016d9698388cf19572adbb94ec8e07a4f574cf16" id=210bf27b-24e0-4b3e-93b1-136511a9b2db name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 06:09:36 addons-177895 crio[774]: time="2025-12-05T06:09:36.185372809Z" level=info msg="Started container" PID=9317 containerID=32088a1feffe27f010ec391b016d9698388cf19572adbb94ec8e07a4f574cf16 description=default/hello-world-app-5d498dc89-79jkg/hello-world-app id=210bf27b-24e0-4b3e-93b1-136511a9b2db name=/runtime.v1.RuntimeService/StartContainer sandboxID=7801fbd3a34bc5e149a598c0d20d8fe1464ab61163074c3e7c4d44973ce106cd
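
The CRI-O entries above record the hello-world-app container coming up: the kubelet checks the image status, pulls docker.io/kicbase/echo-server:1.0 because it is missing, then creates and starts the container. As a rough sketch only, the same ImageService calls can be issued directly over the CRI gRPC API; this assumes CRI-O's default socket path and is not part of the test run.

// cri_pull_sketch.go — hedged illustration of the ImageStatus/PullImage flow.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O listening on its default unix socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	spec := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}

	// Counterpart of the "Checking image status" lines in the log.
	st, err := img.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		// Counterpart of the "Pulling image" / "Pulled image" lines.
		resp, err := img.PullImage(context.TODO(), &runtimeapi.PullImageRequest{Image: spec})
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", resp.ImageRef)
	} else {
		fmt.Println("already present:", st.Image.Id)
	}
}
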
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	32088a1feffe2       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   7801fbd3a34bc       hello-world-app-5d498dc89-79jkg            default
	fb4f2232e6dff       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             56 seconds ago           Running             registry-creds                           0                   4375961753da1       registry-creds-764b6fb674-8p8pq            kube-system
	1ba44b44e58b7       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   ec580be2fc035       nginx                                      default
	a1b8cb88b9e01       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   aa38144f2c172       busybox                                    default
	e6340ed626c1a       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   31830ffd539a5       ingress-nginx-controller-6c8bf45fb-8r9xg   ingress-nginx
	16645d5e8e337       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	819ee604de0dc       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	7897ed230bdcb       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	bd0232ddd5627       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	07cc0f5510b04       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             2 minutes ago            Exited              patch                                    2                   a00c1487e3639       ingress-nginx-admission-patch-98kcw        ingress-nginx
	71790cf94f6b0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   302816952d837       gadget-gb572                               gadget
	d658de91425e0       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	ac7e74d074bd9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   40c735c75f040       gcp-auth-78565c9fb4-jpdgf                  gcp-auth
	4c91c5eca3759       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   8cd6deb660f5e       registry-proxy-gzlfd                       kube-system
	b1cef4ce17c14       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   884b5e57aaf60       nvidia-device-plugin-daemonset-vqq7b       kube-system
	3bcfb73c2da0e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	320976162b2e2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   2 minutes ago            Exited              create                                   0                   79019fa95f2a2       ingress-nginx-admission-create-756km       ingress-nginx
	32a622051217c       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              2 minutes ago            Running             yakd                                     0                   858664db72ceb       yakd-dashboard-5ff678cb9-qdmqt             yakd-dashboard
	1daa53d0ceb64       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             2 minutes ago            Running             csi-attacher                             0                   267b0b56ad3bb       csi-hostpath-attacher-0                    kube-system
	a1990665675a8       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago            Running             amd-gpu-device-plugin                    0                   dd4b53e83dc69       amd-gpu-device-plugin-tff2n                kube-system
	32921b8595d6e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   8cc12f4d28a93       snapshot-controller-7d9fbc56b8-h9khj       kube-system
	0be783dd8c5fd       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   27bf4ff3feb66       snapshot-controller-7d9fbc56b8-d5g82       kube-system
	f88019728f44c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   cf025d22f2de6       kube-ingress-dns-minikube                  kube-system
	6e7946313d15a       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   2ee456bf5aac8       csi-hostpath-resizer-0                     kube-system
	e90447f36f9cc       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   62f39f9b416ef       cloud-spanner-emulator-5bdddb765-7zxgt     default
	0038f726928bc       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   6bb7f9d40c09d       local-path-provisioner-648f6765c9-kq9cd    local-path-storage
	bc1820c39f391       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   a5ba51b34dce7       registry-6b586f9694-hcpm2                  kube-system
	eae7b2e3083fc       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   8b5a3431e2e8d       metrics-server-85b7d694d7-7cspb            kube-system
	939f9276ecdd3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   d292dce4695d3       coredns-66bc5c9577-xlfl4                   kube-system
	fae790e0ec5bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   9e69085c7c02c       storage-provisioner                        kube-system
	e2c0cd58d28ef       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             3 minutes ago            Running             kube-proxy                               0                   4033e9af17298       kube-proxy-gk8dq                           kube-system
	36b03b6292161       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago            Running             kindnet-cni                              0                   103bb854ba207       kindnet-n79ts                              kube-system
	d693c2ca57323       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   0983dd47daf69       kube-scheduler-addons-177895               kube-system
	88d316347724e       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   09acc882071fe       kube-controller-manager-addons-177895      kube-system
	7e02812d9d790       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   7118545e98873       kube-apiserver-addons-177895               kube-system
	a744380007274       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   3070a2e0c5a0a       etcd-addons-177895                         kube-system
	
	
	==> coredns [939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7] <==
	[INFO] 10.244.0.20:33517 - 54082 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013656s
	[INFO] 10.244.0.20:45274 - 50201 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004916331s
	[INFO] 10.244.0.20:58363 - 33604 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005005132s
	[INFO] 10.244.0.20:33925 - 25611 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005222118s
	[INFO] 10.244.0.20:40134 - 53759 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00532459s
	[INFO] 10.244.0.20:39333 - 9813 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003914847s
	[INFO] 10.244.0.20:38762 - 42859 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004132652s
	[INFO] 10.244.0.20:56223 - 59393 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001080804s
	[INFO] 10.244.0.20:40687 - 12728 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002015257s
	[INFO] 10.244.0.25:41077 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000215486s
	[INFO] 10.244.0.25:38249 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000188968s
	[INFO] 10.244.0.31:45475 - 42167 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000173399s
	[INFO] 10.244.0.31:46203 - 40755 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000282464s
	[INFO] 10.244.0.31:36547 - 28274 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000090459s
	[INFO] 10.244.0.31:40354 - 12578 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000143137s
	[INFO] 10.244.0.31:60819 - 47601 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000115809s
	[INFO] 10.244.0.31:53534 - 57201 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000154901s
	[INFO] 10.244.0.31:59328 - 28141 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.005378046s
	[INFO] 10.244.0.31:46834 - 54719 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006207935s
	[INFO] 10.244.0.31:58018 - 29904 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004462812s
	[INFO] 10.244.0.31:36456 - 24565 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005145197s
	[INFO] 10.244.0.31:41092 - 63990 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00427403s
	[INFO] 10.244.0.31:56462 - 21584 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.008358517s
	[INFO] 10.244.0.31:56133 - 18426 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.00152577s
	[INFO] 10.244.0.31:36113 - 52897 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001614611s
	
	
	==> describe nodes <==
	Name:               addons-177895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-177895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=addons-177895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_05_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-177895
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-177895"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:05:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-177895
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:09:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:09:10 +0000   Fri, 05 Dec 2025 06:05:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:09:10 +0000   Fri, 05 Dec 2025 06:05:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:09:10 +0000   Fri, 05 Dec 2025 06:05:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:09:10 +0000   Fri, 05 Dec 2025 06:06:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-177895
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                c5b2c12d-676e-4624-9c30-d03b99e0eb27
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  default                     cloud-spanner-emulator-5bdddb765-7zxgt      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  default                     hello-world-app-5d498dc89-79jkg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-gb572                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  gcp-auth                    gcp-auth-78565c9fb4-jpdgf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-8r9xg    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m54s
	  kube-system                 amd-gpu-device-plugin-tff2n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 coredns-66bc5c9577-xlfl4                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m55s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 csi-hostpathplugin-gm8fx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 etcd-addons-177895                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m2s
	  kube-system                 kindnet-n79ts                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m55s
	  kube-system                 kube-apiserver-addons-177895                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-controller-manager-addons-177895       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-gk8dq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-scheduler-addons-177895                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 metrics-server-85b7d694d7-7cspb             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m54s
	  kube-system                 nvidia-device-plugin-daemonset-vqq7b        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 registry-6b586f9694-hcpm2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 registry-creds-764b6fb674-8p8pq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 registry-proxy-gzlfd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-d5g82        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 snapshot-controller-7d9fbc56b8-h9khj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  local-path-storage          local-path-provisioner-648f6765c9-kq9cd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-qdmqt              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m53s  kube-proxy       
	  Normal  Starting                 4m1s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m1s   kubelet          Node addons-177895 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s   kubelet          Node addons-177895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s   kubelet          Node addons-177895 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m56s  node-controller  Node addons-177895 event: Registered Node addons-177895 in Controller
	  Normal  NodeReady                3m14s  kubelet          Node addons-177895 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081455] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024960] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.135465] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 5 06:07] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.022771] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023869] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023920] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023880] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +2.047782] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +4.032580] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +8.063178] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[ +16.381345] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[Dec 5 06:08] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	
	
	==> etcd [a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0] <==
	{"level":"warn","ts":"2025-12-05T06:05:32.902964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.908949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.915628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.921735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.927811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.934802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.958409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.964631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.972114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:33.015264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:43.548055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:06:10.426714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:06:10.435643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:06:10.450345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:06:10.456627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43082","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T06:06:38.948986Z","caller":"traceutil/trace.go:172","msg":"trace[1948693037] linearizableReadLoop","detail":"{readStateIndex:1085; appliedIndex:1085; }","duration":"119.98788ms","start":"2025-12-05T06:06:38.828984Z","end":"2025-12-05T06:06:38.948972Z","steps":["trace[1948693037] 'read index received'  (duration: 119.983333ms)","trace[1948693037] 'applied index is now lower than readState.Index'  (duration: 3.924µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-05T06:06:38.949483Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.480851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-8tkdd\" limit:1 ","response":"range_response_count:1 size:4260"}
	{"level":"info","ts":"2025-12-05T06:06:38.949551Z","caller":"traceutil/trace.go:172","msg":"trace[378412408] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-create-8tkdd; range_end:; response_count:1; response_revision:1054; }","duration":"120.564055ms","start":"2025-12-05T06:06:38.828976Z","end":"2025-12-05T06:06:38.949540Z","steps":["trace[378412408] 'agreement among raft nodes before linearized reading'  (duration: 120.072916ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:06:38.949572Z","caller":"traceutil/trace.go:172","msg":"trace[1644745860] transaction","detail":"{read_only:false; response_revision:1056; number_of_response:1; }","duration":"156.218749ms","start":"2025-12-05T06:06:38.793349Z","end":"2025-12-05T06:06:38.949568Z","steps":["trace[1644745860] 'process raft request'  (duration: 156.130092ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:06:38.949566Z","caller":"traceutil/trace.go:172","msg":"trace[1332123197] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"162.495983ms","start":"2025-12-05T06:06:38.787061Z","end":"2025-12-05T06:06:38.949557Z","steps":["trace[1332123197] 'process raft request'  (duration: 162.036308ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:06:38.949546Z","caller":"traceutil/trace.go:172","msg":"trace[1112850690] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"121.191274ms","start":"2025-12-05T06:06:38.828344Z","end":"2025-12-05T06:06:38.949536Z","steps":["trace[1112850690] 'process raft request'  (duration: 121.166424ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:06:48.072587Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.111713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-05T06:06:48.072733Z","caller":"traceutil/trace.go:172","msg":"trace[1232396964] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1114; }","duration":"108.262925ms","start":"2025-12-05T06:06:47.964452Z","end":"2025-12-05T06:06:48.072715Z","steps":["trace[1232396964] 'range keys from in-memory index tree'  (duration: 108.058193ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:06:55.782735Z","caller":"traceutil/trace.go:172","msg":"trace[1034833694] transaction","detail":"{read_only:false; response_revision:1189; number_of_response:1; }","duration":"107.501655ms","start":"2025-12-05T06:06:55.675210Z","end":"2025-12-05T06:06:55.782712Z","steps":["trace[1034833694] 'process raft request'  (duration: 107.316883ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:07:02.986496Z","caller":"traceutil/trace.go:172","msg":"trace[1413813968] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"100.098889ms","start":"2025-12-05T06:07:02.886383Z","end":"2025-12-05T06:07:02.986482Z","steps":["trace[1413813968] 'process raft request'  (duration: 100.02092ms)"],"step_count":1}
	
	
	==> gcp-auth [ac7e74d074bd9997be585172c57fb1a6c8161383dc7f811de09d617facf2a11a] <==
	2025/12/05 06:06:48 GCP Auth Webhook started!
	2025/12/05 06:07:01 Ready to marshal response ...
	2025/12/05 06:07:01 Ready to write response ...
	2025/12/05 06:07:01 Ready to marshal response ...
	2025/12/05 06:07:01 Ready to write response ...
	2025/12/05 06:07:01 Ready to marshal response ...
	2025/12/05 06:07:01 Ready to write response ...
	2025/12/05 06:07:10 Ready to marshal response ...
	2025/12/05 06:07:10 Ready to write response ...
	2025/12/05 06:07:19 Ready to marshal response ...
	2025/12/05 06:07:19 Ready to write response ...
	2025/12/05 06:07:21 Ready to marshal response ...
	2025/12/05 06:07:21 Ready to write response ...
	2025/12/05 06:07:21 Ready to marshal response ...
	2025/12/05 06:07:21 Ready to write response ...
	2025/12/05 06:07:30 Ready to marshal response ...
	2025/12/05 06:07:30 Ready to write response ...
	2025/12/05 06:07:36 Ready to marshal response ...
	2025/12/05 06:07:36 Ready to write response ...
	2025/12/05 06:08:03 Ready to marshal response ...
	2025/12/05 06:08:03 Ready to write response ...
	2025/12/05 06:09:35 Ready to marshal response ...
	2025/12/05 06:09:35 Ready to write response ...
	
	
	==> kernel <==
	 06:09:36 up 52 min,  0 user,  load average: 0.34, 0.72, 0.37
	Linux addons-177895 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25] <==
	I1205 06:07:32.383032       1 main.go:301] handling current node
	I1205 06:07:42.384141       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:07:42.384178       1 main.go:301] handling current node
	I1205 06:07:52.384067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:07:52.384094       1 main.go:301] handling current node
	I1205 06:08:02.382971       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:08:02.383005       1 main.go:301] handling current node
	I1205 06:08:12.382763       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:08:12.382794       1 main.go:301] handling current node
	I1205 06:08:22.382865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:08:22.382894       1 main.go:301] handling current node
	I1205 06:08:32.390946       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:08:32.390974       1 main.go:301] handling current node
	I1205 06:08:42.383225       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:08:42.383271       1 main.go:301] handling current node
	I1205 06:08:52.389892       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:08:52.389921       1 main.go:301] handling current node
	I1205 06:09:02.384847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:09:02.384875       1 main.go:301] handling current node
	I1205 06:09:12.383384       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:09:12.383421       1 main.go:301] handling current node
	I1205 06:09:22.386096       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:09:22.386127       1 main.go:301] handling current node
	I1205 06:09:32.391178       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:09:32.391212       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8] <==
	W1205 06:06:10.456569       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1205 06:06:22.523421       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.130.66:443: connect: connection refused
	E1205 06:06:22.523483       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.130.66:443: connect: connection refused" logger="UnhandledError"
	W1205 06:06:22.523624       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.130.66:443: connect: connection refused
	E1205 06:06:22.523652       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.130.66:443: connect: connection refused" logger="UnhandledError"
	W1205 06:06:22.548276       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.130.66:443: connect: connection refused
	E1205 06:06:22.548302       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.130.66:443: connect: connection refused" logger="UnhandledError"
	W1205 06:06:22.551264       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.130.66:443: connect: connection refused
	E1205 06:06:22.551368       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.130.66:443: connect: connection refused" logger="UnhandledError"
	E1205 06:06:25.728089       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.212.194:443: connect: connection refused" logger="UnhandledError"
	W1205 06:06:25.728188       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 06:06:25.728249       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1205 06:06:25.728478       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.212.194:443: connect: connection refused" logger="UnhandledError"
	E1205 06:06:25.733536       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.212.194:443: connect: connection refused" logger="UnhandledError"
	E1205 06:06:25.754125       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.212.194:443: connect: connection refused" logger="UnhandledError"
	I1205 06:06:25.818741       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1205 06:07:08.570231       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57004: use of closed network connection
	E1205 06:07:08.706761       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57034: use of closed network connection
	I1205 06:07:09.978282       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 06:07:10.153536       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.65.134"}
	I1205 06:07:45.273502       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 06:09:35.105700       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.161.192"}
	
	
	==> kube-controller-manager [88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50] <==
	I1205 06:05:40.413397       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1205 06:05:40.413416       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 06:05:40.414498       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1205 06:05:40.414516       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 06:05:40.416812       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1205 06:05:40.416891       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1205 06:05:40.417989       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:05:40.419174       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1205 06:05:40.419238       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1205 06:05:40.419280       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 06:05:40.419290       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1205 06:05:40.419297       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1205 06:05:40.423388       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 06:05:40.425128       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-177895" podCIDRs=["10.244.0.0/24"]
	I1205 06:05:40.428351       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1205 06:05:40.433557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1205 06:05:42.420641       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1205 06:06:10.421758       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 06:06:10.421879       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1205 06:06:10.421925       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1205 06:06:10.442180       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1205 06:06:10.445558       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1205 06:06:10.522532       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:06:10.546706       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:06:25.369138       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601] <==
	I1205 06:05:42.132244       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:05:42.334171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:05:42.440544       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:05:42.440625       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1205 06:05:42.440754       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:05:42.573216       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:05:42.573343       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:05:42.580660       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:05:42.586181       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:05:42.586211       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:05:42.587906       1 config.go:200] "Starting service config controller"
	I1205 06:05:42.588041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:05:42.588565       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:05:42.589005       1 config.go:309] "Starting node config controller"
	I1205 06:05:42.589033       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:05:42.589041       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:05:42.589875       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:05:42.587783       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:05:42.597437       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:05:42.597463       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 06:05:42.688670       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:05:42.690416       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03] <==
	E1205 06:05:33.417364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 06:05:33.417476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 06:05:33.417517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:05:33.417514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 06:05:33.417555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 06:05:33.417243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:05:33.417641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 06:05:33.417640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:05:33.417645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 06:05:33.417649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 06:05:33.417735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 06:05:33.417749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 06:05:33.417748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 06:05:33.417756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:05:33.417819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 06:05:33.417848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 06:05:34.272028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:05:34.323411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1205 06:05:34.324167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 06:05:34.328208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 06:05:34.406554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:05:34.522074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 06:05:34.550108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 06:05:34.557019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1205 06:05:37.511972       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.779509    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/784da7bf-bc58-46ed-9edc-88c4c3e39ea6-gcp-creds\") pod \"784da7bf-bc58-46ed-9edc-88c4c3e39ea6\" (UID: \"784da7bf-bc58-46ed-9edc-88c4c3e39ea6\") "
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.779565    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gntm2\" (UniqueName: \"kubernetes.io/projected/784da7bf-bc58-46ed-9edc-88c4c3e39ea6-kube-api-access-gntm2\") pod \"784da7bf-bc58-46ed-9edc-88c4c3e39ea6\" (UID: \"784da7bf-bc58-46ed-9edc-88c4c3e39ea6\") "
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.779629    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/784da7bf-bc58-46ed-9edc-88c4c3e39ea6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "784da7bf-bc58-46ed-9edc-88c4c3e39ea6" (UID: "784da7bf-bc58-46ed-9edc-88c4c3e39ea6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.779683    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c1fd50aa-d1a0-11f0-a0c2-a20ea85f422b\") pod \"784da7bf-bc58-46ed-9edc-88c4c3e39ea6\" (UID: \"784da7bf-bc58-46ed-9edc-88c4c3e39ea6\") "
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.779930    1291 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/784da7bf-bc58-46ed-9edc-88c4c3e39ea6-gcp-creds\") on node \"addons-177895\" DevicePath \"\""
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.781712    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784da7bf-bc58-46ed-9edc-88c4c3e39ea6-kube-api-access-gntm2" (OuterVolumeSpecName: "kube-api-access-gntm2") pod "784da7bf-bc58-46ed-9edc-88c4c3e39ea6" (UID: "784da7bf-bc58-46ed-9edc-88c4c3e39ea6"). InnerVolumeSpecName "kube-api-access-gntm2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.782528    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^c1fd50aa-d1a0-11f0-a0c2-a20ea85f422b" (OuterVolumeSpecName: "task-pv-storage") pod "784da7bf-bc58-46ed-9edc-88c4c3e39ea6" (UID: "784da7bf-bc58-46ed-9edc-88c4c3e39ea6"). InnerVolumeSpecName "pvc-b9dce5bd-6bb7-4a19-921e-63309ca145a9". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.881091    1291 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-b9dce5bd-6bb7-4a19-921e-63309ca145a9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c1fd50aa-d1a0-11f0-a0c2-a20ea85f422b\") on node \"addons-177895\" "
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.881118    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gntm2\" (UniqueName: \"kubernetes.io/projected/784da7bf-bc58-46ed-9edc-88c4c3e39ea6-kube-api-access-gntm2\") on node \"addons-177895\" DevicePath \"\""
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.884989    1291 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-b9dce5bd-6bb7-4a19-921e-63309ca145a9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^c1fd50aa-d1a0-11f0-a0c2-a20ea85f422b") on node "addons-177895"
	Dec 05 06:08:10 addons-177895 kubelet[1291]: I1205 06:08:10.982106    1291 reconciler_common.go:299] "Volume detached for volume \"pvc-b9dce5bd-6bb7-4a19-921e-63309ca145a9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c1fd50aa-d1a0-11f0-a0c2-a20ea85f422b\") on node \"addons-177895\" DevicePath \"\""
	Dec 05 06:08:11 addons-177895 kubelet[1291]: I1205 06:08:11.148139    1291 scope.go:117] "RemoveContainer" containerID="dd6bf136d8b32d59437c295899f493212c0d1ead0308dfe604eb3cdd01defccb"
	Dec 05 06:08:11 addons-177895 kubelet[1291]: I1205 06:08:11.157496    1291 scope.go:117] "RemoveContainer" containerID="dd6bf136d8b32d59437c295899f493212c0d1ead0308dfe604eb3cdd01defccb"
	Dec 05 06:08:11 addons-177895 kubelet[1291]: E1205 06:08:11.157880    1291 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6bf136d8b32d59437c295899f493212c0d1ead0308dfe604eb3cdd01defccb\": container with ID starting with dd6bf136d8b32d59437c295899f493212c0d1ead0308dfe604eb3cdd01defccb not found: ID does not exist" containerID="dd6bf136d8b32d59437c295899f493212c0d1ead0308dfe604eb3cdd01defccb"
	Dec 05 06:08:11 addons-177895 kubelet[1291]: I1205 06:08:11.157925    1291 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6bf136d8b32d59437c295899f493212c0d1ead0308dfe604eb3cdd01defccb"} err="failed to get container status \"dd6bf136d8b32d59437c295899f493212c0d1ead0308dfe604eb3cdd01defccb\": rpc error: code = NotFound desc = could not find container \"dd6bf136d8b32d59437c295899f493212c0d1ead0308dfe604eb3cdd01defccb\": container with ID starting with dd6bf136d8b32d59437c295899f493212c0d1ead0308dfe604eb3cdd01defccb not found: ID does not exist"
	Dec 05 06:08:11 addons-177895 kubelet[1291]: I1205 06:08:11.581876    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="784da7bf-bc58-46ed-9edc-88c4c3e39ea6" path="/var/lib/kubelet/pods/784da7bf-bc58-46ed-9edc-88c4c3e39ea6/volumes"
	Dec 05 06:08:15 addons-177895 kubelet[1291]: I1205 06:08:15.579880    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gzlfd" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:08:25 addons-177895 kubelet[1291]: E1205 06:08:25.544910    1291 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-8p8pq" podUID="8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb"
	Dec 05 06:08:40 addons-177895 kubelet[1291]: I1205 06:08:40.261318    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-8p8pq" podStartSLOduration=178.194560046 podStartE2EDuration="2m59.261301836s" podCreationTimestamp="2025-12-05 06:05:41 +0000 UTC" firstStartedPulling="2025-12-05 06:08:38.599813348 +0000 UTC m=+183.101468122" lastFinishedPulling="2025-12-05 06:08:39.666555152 +0000 UTC m=+184.168209912" observedRunningTime="2025-12-05 06:08:40.260560673 +0000 UTC m=+184.762215455" watchObservedRunningTime="2025-12-05 06:08:40.261301836 +0000 UTC m=+184.762956617"
	Dec 05 06:08:54 addons-177895 kubelet[1291]: I1205 06:08:54.578460    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-vqq7b" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:09:00 addons-177895 kubelet[1291]: I1205 06:09:00.579356    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-tff2n" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:09:30 addons-177895 kubelet[1291]: I1205 06:09:30.578832    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gzlfd" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:09:35 addons-177895 kubelet[1291]: I1205 06:09:35.140551    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cv2c\" (UniqueName: \"kubernetes.io/projected/ff17225d-78dd-4232-a696-158bda68711d-kube-api-access-5cv2c\") pod \"hello-world-app-5d498dc89-79jkg\" (UID: \"ff17225d-78dd-4232-a696-158bda68711d\") " pod="default/hello-world-app-5d498dc89-79jkg"
	Dec 05 06:09:35 addons-177895 kubelet[1291]: I1205 06:09:35.140596    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ff17225d-78dd-4232-a696-158bda68711d-gcp-creds\") pod \"hello-world-app-5d498dc89-79jkg\" (UID: \"ff17225d-78dd-4232-a696-158bda68711d\") " pod="default/hello-world-app-5d498dc89-79jkg"
	Dec 05 06:09:36 addons-177895 kubelet[1291]: I1205 06:09:36.449258    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-79jkg" podStartSLOduration=0.676151934 podStartE2EDuration="1.449225765s" podCreationTimestamp="2025-12-05 06:09:35 +0000 UTC" firstStartedPulling="2025-12-05 06:09:35.36192878 +0000 UTC m=+239.863583541" lastFinishedPulling="2025-12-05 06:09:36.135002594 +0000 UTC m=+240.636657372" observedRunningTime="2025-12-05 06:09:36.448599188 +0000 UTC m=+240.950253970" watchObservedRunningTime="2025-12-05 06:09:36.449225765 +0000 UTC m=+240.950880546"
	
	
	==> storage-provisioner [fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce] <==
	W1205 06:09:11.611908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:13.614756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:13.618029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:15.620215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:15.624004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:17.627100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:17.630538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:19.633218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:19.636821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:21.640216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:21.645874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:23.648828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:23.653439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:25.656300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:25.659681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:27.662265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:27.668660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:29.671193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:29.674749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:31.677081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:31.680565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:33.683346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:33.686789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:35.689979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:09:35.695183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
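The kube-apiserver log above shows repeated "failed calling webhook gcp-auth-mutate.k8s.io ... connect: connection refused" errors against 10.102.130.66:443 before the gcp-auth webhook reported ready. A rough way to confirm whether the webhook Service had live backends at that point (a sketch only; the Service and namespace names are taken from the webhook URL gcp-auth.gcp-auth.svc in the log, not verified here):
	kubectl --context addons-177895 get mutatingwebhookconfigurations            # list registered mutating webhooks
	kubectl --context addons-177895 -n gcp-auth get pods,svc,endpointslices      # check for ready backends behind the webhook ClusterIP
	kubectl --context addons-177895 -n gcp-auth describe svc gcp-auth            # assumed Service name, from the URL gcp-auth.gcp-auth.svc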
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-177895 -n addons-177895
helpers_test.go:269: (dbg) Run:  kubectl --context addons-177895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-756km ingress-nginx-admission-patch-98kcw
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-177895 describe pod ingress-nginx-admission-create-756km ingress-nginx-admission-patch-98kcw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-177895 describe pod ingress-nginx-admission-create-756km ingress-nginx-admission-patch-98kcw: exit status 1 (53.40291ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-756km" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-98kcw" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-177895 describe pod ingress-nginx-admission-create-756km ingress-nginx-admission-patch-98kcw: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (235.083624ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:09:37.444843   32216 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:09:37.445019   32216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:09:37.445031   32216 out.go:374] Setting ErrFile to fd 2...
	I1205 06:09:37.445038   32216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:09:37.445274   32216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:09:37.445582   32216 mustload.go:66] Loading cluster: addons-177895
	I1205 06:09:37.445927   32216 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:09:37.445947   32216 addons.go:622] checking whether the cluster is paused
	I1205 06:09:37.446048   32216 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:09:37.446066   32216 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:09:37.446454   32216 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:09:37.463865   32216 ssh_runner.go:195] Run: systemctl --version
	I1205 06:09:37.463935   32216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:09:37.480488   32216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:09:37.576280   32216 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:09:37.576385   32216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:09:37.602795   32216 cri.go:89] found id: "fb4f2232e6dff43766a5cfa6e044f0925c21e8e7773b42b39dde6872a67db3d5"
	I1205 06:09:37.602819   32216 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:09:37.602824   32216 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:09:37.602828   32216 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:09:37.602831   32216 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:09:37.602835   32216 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:09:37.602837   32216 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:09:37.602840   32216 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:09:37.602843   32216 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:09:37.602849   32216 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:09:37.602852   32216 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:09:37.602855   32216 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:09:37.602857   32216 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:09:37.602860   32216 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:09:37.602863   32216 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:09:37.602869   32216 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:09:37.602875   32216 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:09:37.602879   32216 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:09:37.602883   32216 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:09:37.602885   32216 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:09:37.602891   32216 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:09:37.602894   32216 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:09:37.602897   32216 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:09:37.602900   32216 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:09:37.602902   32216 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:09:37.602905   32216 cri.go:89] found id: ""
	I1205 06:09:37.602942   32216 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:09:37.616483   32216 out.go:203] 
	W1205 06:09:37.617563   32216 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:09:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:09:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:09:37.617587   32216 out.go:285] * 
	* 
	W1205 06:09:37.620872   32216 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:09:37.621947   32216 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable ingress --alsologtostderr -v=1: exit status 11 (234.633337ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:09:37.678082   32280 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:09:37.678223   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:09:37.678233   32280 out.go:374] Setting ErrFile to fd 2...
	I1205 06:09:37.678237   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:09:37.678821   32280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:09:37.679378   32280 mustload.go:66] Loading cluster: addons-177895
	I1205 06:09:37.679962   32280 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:09:37.679985   32280 addons.go:622] checking whether the cluster is paused
	I1205 06:09:37.680071   32280 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:09:37.680086   32280 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:09:37.680444   32280 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:09:37.697031   32280 ssh_runner.go:195] Run: systemctl --version
	I1205 06:09:37.697090   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:09:37.714656   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:09:37.810409   32280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:09:37.810476   32280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:09:37.838788   32280 cri.go:89] found id: "fb4f2232e6dff43766a5cfa6e044f0925c21e8e7773b42b39dde6872a67db3d5"
	I1205 06:09:37.838810   32280 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:09:37.838815   32280 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:09:37.838818   32280 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:09:37.838820   32280 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:09:37.838824   32280 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:09:37.838827   32280 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:09:37.838829   32280 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:09:37.838836   32280 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:09:37.838846   32280 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:09:37.838854   32280 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:09:37.838858   32280 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:09:37.838860   32280 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:09:37.838863   32280 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:09:37.838866   32280 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:09:37.838871   32280 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:09:37.838874   32280 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:09:37.838879   32280 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:09:37.838881   32280 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:09:37.838884   32280 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:09:37.838890   32280 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:09:37.838892   32280 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:09:37.838895   32280 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:09:37.838897   32280 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:09:37.838900   32280 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:09:37.838902   32280 cri.go:89] found id: ""
	I1205 06:09:37.838940   32280 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:09:37.851948   32280 out.go:203] 
	W1205 06:09:37.852992   32280 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:09:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:09:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:09:37.853014   32280 out.go:285] * 
	* 
	W1205 06:09:37.856057   32280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:09:37.857083   32280 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.91s)
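Both addons disable calls above exit with MK_ADDON_DISABLE_PAUSED because minikube's paused check shells into the node and runs "sudo runc list -f json", which fails on this crio-based node with "open /run/runc: no such file or directory". A minimal sketch for reproducing that check by hand, reusing the commands shown in the stderr capture (the ls paths are assumptions about where the runtime keeps its state, not something the log confirms):
	out/minikube-linux-amd64 -p addons-177895 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system  # containers minikube inspects
	out/minikube-linux-amd64 -p addons-177895 ssh -- sudo runc list -f json  # the step that exits non-zero in the log
	out/minikube-linux-amd64 -p addons-177895 ssh -- ls /run/runc /run/crio  # assumed runtime state directories; /run/runc is reported missing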

                                                
                                    
TestAddons/parallel/InspektorGadget (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-gb572" [c7b2489f-835f-4f51-990a-01d87747c9fc] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003631406s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (235.630239ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:33.085402   29115 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:33.086139   29115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:33.086150   29115 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:33.086154   29115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:33.086313   29115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:33.086562   29115 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:33.086919   29115 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:33.086937   29115 addons.go:622] checking whether the cluster is paused
	I1205 06:07:33.087014   29115 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:33.087025   29115 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:33.087367   29115 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:33.107115   29115 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:33.107166   29115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:33.124344   29115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:33.220382   29115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:33.220462   29115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:33.246207   29115 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:33.246227   29115 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:33.246231   29115 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:33.246234   29115 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:33.246237   29115 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:33.246241   29115 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:33.246244   29115 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:33.246249   29115 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:33.246253   29115 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:33.246279   29115 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:33.246313   29115 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:33.246344   29115 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:33.246353   29115 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:33.246358   29115 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:33.246365   29115 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:33.246371   29115 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:33.246374   29115 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:33.246379   29115 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:33.246381   29115 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:33.246386   29115 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:33.246390   29115 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:33.246394   29115 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:33.246399   29115 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:33.246404   29115 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:33.246408   29115 cri.go:89] found id: ""
	I1205 06:07:33.246460   29115 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:33.259705   29115 out.go:203] 
	W1205 06:07:33.260818   29115 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:33.260838   29115 out.go:285] * 
	* 
	W1205 06:07:33.263878   29115 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:33.265100   29115 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.24s)
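Every MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED failure in this run follows the pattern visible in the stderr above: the addon command lists kube-system containers through crictl (which succeeds) and then probes the paused state with "sudo runc list -f json", which exits 1 because /run/runc does not exist on this crio node. A minimal reproduction sketch against this profile, reusing only the two commands already shown in the log (the minikube ssh wrapper is an assumption about how to reach the node; it is not part of the test):

	# lists kube-system container IDs - succeeded in this run
	minikube -p addons-177895 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# paused-state probe - fails here with: open /run/runc: no such file or directory
	minikube -p addons-177895 ssh -- sudo runc list -f json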

                                                
                                    
TestAddons/parallel/MetricsServer (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.084149ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002722128s
addons_test.go:463: (dbg) Run:  kubectl --context addons-177895 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (233.372253ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:15.065875   27633 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:15.066137   27633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:15.066146   27633 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:15.066150   27633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:15.066312   27633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:15.066534   27633 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:15.066834   27633 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:15.066853   27633 addons.go:622] checking whether the cluster is paused
	I1205 06:07:15.066932   27633 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:15.066946   27633 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:15.067270   27633 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:15.084515   27633 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:15.084563   27633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:15.101704   27633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:15.197430   27633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:15.197504   27633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:15.224721   27633 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:15.224753   27633 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:15.224758   27633 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:15.224761   27633 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:15.224764   27633 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:15.224768   27633 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:15.224771   27633 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:15.224774   27633 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:15.224777   27633 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:15.224785   27633 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:15.224788   27633 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:15.224790   27633 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:15.224793   27633 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:15.224796   27633 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:15.224799   27633 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:15.224806   27633 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:15.224811   27633 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:15.224816   27633 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:15.224818   27633 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:15.224825   27633 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:15.224828   27633 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:15.224831   27633 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:15.224833   27633 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:15.224836   27633 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:15.224839   27633 cri.go:89] found id: ""
	I1205 06:07:15.224881   27633 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:15.238025   27633 out.go:203] 
	W1205 06:07:15.239097   27633 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:15.239112   27633 out.go:285] * 
	* 
	W1205 06:07:15.242029   27633 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:15.243171   27633 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.30s)

                                                
                                    
TestAddons/parallel/CSI (49.02s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1205 06:07:22.925307   16314 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1205 06:07:22.928463   16314 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 06:07:22.928487   16314 kapi.go:107] duration metric: took 3.188435ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.20345ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-177895 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-177895 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [cbebae01-ffcc-41c7-9eca-b57561829036] Pending
helpers_test.go:352: "task-pv-pod" [cbebae01-ffcc-41c7-9eca-b57561829036] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [cbebae01-ffcc-41c7-9eca-b57561829036] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003115019s
addons_test.go:572: (dbg) Run:  kubectl --context addons-177895 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-177895 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-177895 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-177895 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-177895 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-177895 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-177895 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [784da7bf-bc58-46ed-9edc-88c4c3e39ea6] Pending
helpers_test.go:352: "task-pv-pod-restore" [784da7bf-bc58-46ed-9edc-88c4c3e39ea6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [784da7bf-bc58-46ed-9edc-88c4c3e39ea6] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003463125s
addons_test.go:614: (dbg) Run:  kubectl --context addons-177895 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-177895 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-177895 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (233.125512ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:08:11.530052   30160 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:08:11.530198   30160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:08:11.530207   30160 out.go:374] Setting ErrFile to fd 2...
	I1205 06:08:11.530210   30160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:08:11.530381   30160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:08:11.530620   30160 mustload.go:66] Loading cluster: addons-177895
	I1205 06:08:11.530907   30160 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:08:11.530924   30160 addons.go:622] checking whether the cluster is paused
	I1205 06:08:11.531001   30160 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:08:11.531014   30160 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:08:11.531355   30160 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:08:11.548527   30160 ssh_runner.go:195] Run: systemctl --version
	I1205 06:08:11.548607   30160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:08:11.564240   30160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:08:11.662613   30160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:08:11.662689   30160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:08:11.690140   30160 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:08:11.690160   30160 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:08:11.690165   30160 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:08:11.690171   30160 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:08:11.690175   30160 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:08:11.690181   30160 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:08:11.690185   30160 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:08:11.690190   30160 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:08:11.690195   30160 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:08:11.690204   30160 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:08:11.690213   30160 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:08:11.690217   30160 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:08:11.690225   30160 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:08:11.690229   30160 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:08:11.690235   30160 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:08:11.690240   30160 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:08:11.690245   30160 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:08:11.690250   30160 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:08:11.690253   30160 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:08:11.690255   30160 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:08:11.690262   30160 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:08:11.690264   30160 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:08:11.690267   30160 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:08:11.690270   30160 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:08:11.690272   30160 cri.go:89] found id: ""
	I1205 06:08:11.690307   30160 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:08:11.703458   30160 out.go:203] 
	W1205 06:08:11.704626   30160 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:08:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:08:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:08:11.704640   30160 out.go:285] * 
	* 
	W1205 06:08:11.707609   30160 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:08:11.708846   30160 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (235.269118ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:08:11.764396   30236 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:08:11.764531   30236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:08:11.764540   30236 out.go:374] Setting ErrFile to fd 2...
	I1205 06:08:11.764544   30236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:08:11.764754   30236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:08:11.764977   30236 mustload.go:66] Loading cluster: addons-177895
	I1205 06:08:11.765254   30236 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:08:11.765269   30236 addons.go:622] checking whether the cluster is paused
	I1205 06:08:11.765386   30236 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:08:11.765406   30236 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:08:11.765806   30236 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:08:11.783224   30236 ssh_runner.go:195] Run: systemctl --version
	I1205 06:08:11.783275   30236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:08:11.799085   30236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:08:11.895672   30236 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:08:11.895731   30236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:08:11.924260   30236 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:08:11.924279   30236 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:08:11.924284   30236 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:08:11.924287   30236 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:08:11.924290   30236 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:08:11.924294   30236 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:08:11.924296   30236 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:08:11.924299   30236 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:08:11.924302   30236 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:08:11.924307   30236 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:08:11.924310   30236 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:08:11.924313   30236 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:08:11.924316   30236 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:08:11.924319   30236 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:08:11.924347   30236 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:08:11.924358   30236 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:08:11.924367   30236 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:08:11.924372   30236 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:08:11.924377   30236 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:08:11.924381   30236 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:08:11.924386   30236 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:08:11.924389   30236 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:08:11.924391   30236 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:08:11.924394   30236 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:08:11.924397   30236 cri.go:89] found id: ""
	I1205 06:08:11.924438   30236 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:08:11.938062   30236 out.go:203] 
	W1205 06:08:11.939308   30236 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:08:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:08:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:08:11.939352   30236 out.go:285] * 
	* 
	W1205 06:08:11.942945   30236 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:08:11.944169   30236 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (49.02s)
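The CSI scenario itself completed end to end (PVC bound, snapshot created, restored pod ran, and the deletes at addons_test.go:614/618/622 succeeded); only the two addon-disable calls at the end hit the same paused-check error. A hypothetical follow-up check, not part of the test, to confirm the scenario left nothing behind in the default namespace:

	kubectl --context addons-177895 get pvc,volumesnapshot -n default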

                                                
                                    
TestAddons/parallel/Headlamp (2.48s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-177895 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-177895 --alsologtostderr -v=1: exit status 11 (239.860108ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:09.008196   26311 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:09.008547   26311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:09.008562   26311 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:09.008569   26311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:09.008857   26311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:09.009164   26311 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:09.009607   26311 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:09.009628   26311 addons.go:622] checking whether the cluster is paused
	I1205 06:07:09.009711   26311 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:09.009726   26311 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:09.010107   26311 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:09.028240   26311 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:09.028278   26311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:09.044373   26311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:09.140279   26311 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:09.140375   26311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:09.167869   26311 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:09.167891   26311 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:09.167896   26311 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:09.167899   26311 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:09.167902   26311 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:09.167906   26311 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:09.167909   26311 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:09.167912   26311 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:09.167914   26311 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:09.167924   26311 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:09.167928   26311 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:09.167932   26311 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:09.167937   26311 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:09.167945   26311 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:09.167950   26311 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:09.167969   26311 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:09.167977   26311 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:09.167982   26311 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:09.167985   26311 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:09.167987   26311 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:09.167990   26311 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:09.167992   26311 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:09.167995   26311 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:09.167997   26311 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:09.168000   26311 cri.go:89] found id: ""
	I1205 06:07:09.168045   26311 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:09.181371   26311 out.go:203] 
	W1205 06:07:09.182638   26311 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:09.182662   26311 out.go:285] * 
	* 
	W1205 06:07:09.185553   26311 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:09.186851   26311 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-177895 --alsologtostderr -v=1": exit status 11
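The enable path never reaches headlamp: as the stderr above shows, it stops at the same runc probe after inspecting the node container. The two docker probes it logs at I1205 06:07:09 can be replayed by hand; shell-quoted equivalents of those same commands (expected values are the ones this run reported):

	# node container state - "running" in this run
	docker container inspect addons-177895 --format '{{.State.Status}}'
	# host port mapped to the node's SSH port - 32768 in this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-177895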
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-177895
helpers_test.go:243: (dbg) docker inspect addons-177895:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645",
	        "Created": "2025-12-05T06:05:20.814441685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18726,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:05:20.844462315Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645/hosts",
	        "LogPath": "/var/lib/docker/containers/ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645/ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645-json.log",
	        "Name": "/addons-177895",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-177895:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-177895",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed37239a37c9a4984b335edabf30cec29713f3b4fc5ee5bc7130d375d7155645",
	                "LowerDir": "/var/lib/docker/overlay2/527996caf9ce51538de51edf898879f8e40e85f245ffd1a675545ee5e06789d4-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/527996caf9ce51538de51edf898879f8e40e85f245ffd1a675545ee5e06789d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/527996caf9ce51538de51edf898879f8e40e85f245ffd1a675545ee5e06789d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/527996caf9ce51538de51edf898879f8e40e85f245ffd1a675545ee5e06789d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-177895",
	                "Source": "/var/lib/docker/volumes/addons-177895/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-177895",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-177895",
	                "name.minikube.sigs.k8s.io": "addons-177895",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5ccf52cc4eea1f5162c934809d25e5eb4739fe77f52933ac0a60ea4a4d077b2c",
	            "SandboxKey": "/var/run/docker/netns/5ccf52cc4eea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-177895": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cb9fdd45e8b65c8a9fe9be25b359f6f1c5cf5d1ed8bbc11638339eb81ec8d245",
	                    "EndpointID": "502c6638825bdf5815a6cd34a702ae716948c0c6e6ead2573795f4fa79e8b25d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "7a:ae:93:f6:5b:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-177895",
	                        "ed37239a37c9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
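The `NetworkSettings.Ports` map in the inspect output above shows how the kic container's exposed ports are published to ephemeral host ports on 127.0.0.1 (22/tcp -> 32768, 2376/tcp -> 32769, 5000/tcp -> 32770, 8443/tcp -> 32771, 32443/tcp -> 32772). As a minimal sketch of how such a mapping is read back, the same Go template that the provisioning log further below applies to this container can be run by hand (container name taken from the inspect output above):

    # host port published for the container's SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-177895
    # for this run the command would print 32768; substituting "8443/tcp" would print 32771
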
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-177895 -n addons-177895
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-177895 logs -n 25: (1.082334532s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-991192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-991192   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ delete  │ -p download-only-991192                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-991192   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ start   │ -o=json --download-only -p download-only-402726 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-402726   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ delete  │ -p download-only-402726                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-402726   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ start   │ -o=json --download-only -p download-only-500949 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-500949   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ delete  │ -p download-only-500949                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-500949   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ delete  │ -p download-only-991192                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-991192   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ delete  │ -p download-only-402726                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-402726   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ delete  │ -p download-only-500949                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-500949   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ start   │ --download-only -p download-docker-737782 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-737782 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ -p download-docker-737782                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-737782 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ start   │ --download-only -p binary-mirror-565262 --alsologtostderr --binary-mirror http://127.0.0.1:40985 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-565262   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ -p binary-mirror-565262                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-565262   │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ addons  │ disable dashboard -p addons-177895                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-177895          │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ addons  │ enable dashboard -p addons-177895                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-177895          │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ start   │ -p addons-177895 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-177895          │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:07 UTC │
	│ addons  │ addons-177895 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-177895          │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ addons-177895 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-177895          │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	│ addons  │ enable headlamp -p addons-177895 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-177895          │ jenkins │ v1.37.0 │ 05 Dec 25 06:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:04:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:04:57.860254   18088 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:04:57.860361   18088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:57.860370   18088 out.go:374] Setting ErrFile to fd 2...
	I1205 06:04:57.860374   18088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:57.860560   18088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:04:57.861058   18088 out.go:368] Setting JSON to false
	I1205 06:04:57.861830   18088 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2842,"bootTime":1764911856,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:04:57.861875   18088 start.go:143] virtualization: kvm guest
	I1205 06:04:57.863389   18088 out.go:179] * [addons-177895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:04:57.864456   18088 notify.go:221] Checking for updates...
	I1205 06:04:57.864473   18088 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:04:57.865490   18088 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:04:57.866497   18088 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:04:57.867460   18088 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:04:57.868466   18088 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:04:57.869420   18088 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:04:57.870585   18088 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:04:57.891960   18088 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:04:57.892090   18088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:04:57.945541   18088 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-05 06:04:57.936959887 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:04:57.945645   18088 docker.go:319] overlay module found
	I1205 06:04:57.947315   18088 out.go:179] * Using the docker driver based on user configuration
	I1205 06:04:57.948338   18088 start.go:309] selected driver: docker
	I1205 06:04:57.948351   18088 start.go:927] validating driver "docker" against <nil>
	I1205 06:04:57.948361   18088 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:04:57.948902   18088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:04:58.000191   18088 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-05 06:04:57.990740778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:04:58.000347   18088 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:04:58.000554   18088 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:04:58.002101   18088 out.go:179] * Using Docker driver with root privileges
	I1205 06:04:58.003167   18088 cni.go:84] Creating CNI manager for ""
	I1205 06:04:58.003221   18088 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:04:58.003231   18088 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:04:58.003279   18088 start.go:353] cluster config:
	{Name:addons-177895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-177895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1205 06:04:58.004405   18088 out.go:179] * Starting "addons-177895" primary control-plane node in "addons-177895" cluster
	I1205 06:04:58.005347   18088 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:04:58.006397   18088 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:04:58.007347   18088 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:04:58.007373   18088 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 06:04:58.007378   18088 cache.go:65] Caching tarball of preloaded images
	I1205 06:04:58.007436   18088 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:04:58.007450   18088 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 06:04:58.007458   18088 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 06:04:58.007810   18088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/config.json ...
	I1205 06:04:58.007835   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/config.json: {Name:mkfe13a4152566762b6d1f392180f8bb40fb4cda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:04:58.022711   18088 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1205 06:04:58.022812   18088 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1205 06:04:58.022830   18088 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1205 06:04:58.022835   18088 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1205 06:04:58.022845   18088 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1205 06:04:58.022849   18088 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1205 06:05:09.857354   18088 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1205 06:05:09.857386   18088 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:05:09.857423   18088 start.go:360] acquireMachinesLock for addons-177895: {Name:mkcd2447083fe8b63b568f53de9d9a8d6faab33c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:09.857527   18088 start.go:364] duration metric: took 84.296µs to acquireMachinesLock for "addons-177895"
	I1205 06:05:09.857551   18088 start.go:93] Provisioning new machine with config: &{Name:addons-177895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-177895 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:05:09.857618   18088 start.go:125] createHost starting for "" (driver="docker")
	I1205 06:05:09.859654   18088 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1205 06:05:09.859867   18088 start.go:159] libmachine.API.Create for "addons-177895" (driver="docker")
	I1205 06:05:09.859896   18088 client.go:173] LocalClient.Create starting
	I1205 06:05:09.859980   18088 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem
	I1205 06:05:10.160427   18088 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem
	I1205 06:05:10.275132   18088 cli_runner.go:164] Run: docker network inspect addons-177895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 06:05:10.292250   18088 cli_runner.go:211] docker network inspect addons-177895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 06:05:10.292319   18088 network_create.go:284] running [docker network inspect addons-177895] to gather additional debugging logs...
	I1205 06:05:10.292352   18088 cli_runner.go:164] Run: docker network inspect addons-177895
	W1205 06:05:10.306828   18088 cli_runner.go:211] docker network inspect addons-177895 returned with exit code 1
	I1205 06:05:10.306850   18088 network_create.go:287] error running [docker network inspect addons-177895]: docker network inspect addons-177895: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-177895 not found
	I1205 06:05:10.306860   18088 network_create.go:289] output of [docker network inspect addons-177895]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-177895 not found
	
	** /stderr **
	I1205 06:05:10.306954   18088 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:05:10.322090   18088 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e402b0}
	I1205 06:05:10.322131   18088 network_create.go:124] attempt to create docker network addons-177895 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 06:05:10.322167   18088 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-177895 addons-177895
	I1205 06:05:10.365500   18088 network_create.go:108] docker network addons-177895 192.168.49.0/24 created
	I1205 06:05:10.365532   18088 kic.go:121] calculated static IP "192.168.49.2" for the "addons-177895" container
	I1205 06:05:10.365581   18088 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 06:05:10.380065   18088 cli_runner.go:164] Run: docker volume create addons-177895 --label name.minikube.sigs.k8s.io=addons-177895 --label created_by.minikube.sigs.k8s.io=true
	I1205 06:05:10.396105   18088 oci.go:103] Successfully created a docker volume addons-177895
	I1205 06:05:10.396173   18088 cli_runner.go:164] Run: docker run --rm --name addons-177895-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177895 --entrypoint /usr/bin/test -v addons-177895:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 06:05:17.028448   18088 cli_runner.go:217] Completed: docker run --rm --name addons-177895-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177895 --entrypoint /usr/bin/test -v addons-177895:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (6.632240129s)
	I1205 06:05:17.028475   18088 oci.go:107] Successfully prepared a docker volume addons-177895
	I1205 06:05:17.028510   18088 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:05:17.028519   18088 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 06:05:17.028569   18088 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-177895:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 06:05:20.749950   18088 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-177895:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.721343207s)
	I1205 06:05:20.749977   18088 kic.go:203] duration metric: took 3.721455547s to extract preloaded images to volume ...
	W1205 06:05:20.750058   18088 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1205 06:05:20.750087   18088 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1205 06:05:20.750120   18088 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 06:05:20.799903   18088 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-177895 --name addons-177895 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177895 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-177895 --network addons-177895 --ip 192.168.49.2 --volume addons-177895:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 06:05:21.080062   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Running}}
	I1205 06:05:21.099753   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:21.117471   18088 cli_runner.go:164] Run: docker exec addons-177895 stat /var/lib/dpkg/alternatives/iptables
	I1205 06:05:21.166451   18088 oci.go:144] the created container "addons-177895" has a running status.
	I1205 06:05:21.166495   18088 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa...
	I1205 06:05:21.210424   18088 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 06:05:21.237703   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:21.254895   18088 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 06:05:21.254919   18088 kic_runner.go:114] Args: [docker exec --privileged addons-177895 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 06:05:21.294146   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:21.316310   18088 machine.go:94] provisionDockerMachine start ...
	I1205 06:05:21.316413   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:21.334757   18088 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:21.335090   18088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 06:05:21.335113   18088 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:05:21.336332   18088 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45030->127.0.0.1:32768: read: connection reset by peer
	I1205 06:05:24.471512   18088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-177895
	
	I1205 06:05:24.471541   18088 ubuntu.go:182] provisioning hostname "addons-177895"
	I1205 06:05:24.471609   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:24.488478   18088 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:24.488690   18088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 06:05:24.488706   18088 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-177895 && echo "addons-177895" | sudo tee /etc/hostname
	I1205 06:05:24.632219   18088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-177895
	
	I1205 06:05:24.632291   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:24.650316   18088 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:24.650530   18088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 06:05:24.650546   18088 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-177895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-177895/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-177895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:05:24.784133   18088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:05:24.784158   18088 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 06:05:24.784175   18088 ubuntu.go:190] setting up certificates
	I1205 06:05:24.784184   18088 provision.go:84] configureAuth start
	I1205 06:05:24.784229   18088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177895
	I1205 06:05:24.801499   18088 provision.go:143] copyHostCerts
	I1205 06:05:24.801558   18088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 06:05:24.801660   18088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 06:05:24.801724   18088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 06:05:24.801773   18088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.addons-177895 san=[127.0.0.1 192.168.49.2 addons-177895 localhost minikube]
	I1205 06:05:24.874981   18088 provision.go:177] copyRemoteCerts
	I1205 06:05:24.875025   18088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:05:24.875055   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:24.891364   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:24.987016   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:05:25.004372   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 06:05:25.019563   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 06:05:25.034636   18088 provision.go:87] duration metric: took 250.44034ms to configureAuth
	I1205 06:05:25.034657   18088 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:05:25.034817   18088 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:05:25.034944   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:25.051721   18088 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:25.051910   18088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1205 06:05:25.051926   18088 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:05:25.315463   18088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:05:25.315494   18088 machine.go:97] duration metric: took 3.999151809s to provisionDockerMachine
	I1205 06:05:25.315508   18088 client.go:176] duration metric: took 15.455605099s to LocalClient.Create
	I1205 06:05:25.315530   18088 start.go:167] duration metric: took 15.455663009s to libmachine.API.Create "addons-177895"
	I1205 06:05:25.315539   18088 start.go:293] postStartSetup for "addons-177895" (driver="docker")
	I1205 06:05:25.315551   18088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:05:25.315620   18088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:05:25.315665   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:25.332225   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:25.429977   18088 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:05:25.433216   18088 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:05:25.433243   18088 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:05:25.433255   18088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 06:05:25.433307   18088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 06:05:25.433357   18088 start.go:296] duration metric: took 117.811078ms for postStartSetup
	I1205 06:05:25.433624   18088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177895
	I1205 06:05:25.449946   18088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/config.json ...
	I1205 06:05:25.450167   18088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:05:25.450216   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:25.465934   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:25.558475   18088 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:05:25.562461   18088 start.go:128] duration metric: took 15.704830346s to createHost
	I1205 06:05:25.562481   18088 start.go:83] releasing machines lock for "addons-177895", held for 15.704941518s
	I1205 06:05:25.562541   18088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177895
	I1205 06:05:25.578678   18088 ssh_runner.go:195] Run: cat /version.json
	I1205 06:05:25.578715   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:25.578819   18088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:05:25.578896   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:25.595764   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:25.597071   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:25.741239   18088 ssh_runner.go:195] Run: systemctl --version
	I1205 06:05:25.746669   18088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:05:25.777551   18088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:05:25.781554   18088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:05:25.781613   18088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:05:25.805621   18088 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 06:05:25.805637   18088 start.go:496] detecting cgroup driver to use...
	I1205 06:05:25.805665   18088 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 06:05:25.805710   18088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:05:25.819750   18088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:05:25.830428   18088 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:05:25.830472   18088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:05:25.844775   18088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:05:25.859984   18088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:05:25.932481   18088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:05:26.016627   18088 docker.go:234] disabling docker service ...
	I1205 06:05:26.016677   18088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:05:26.032727   18088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:05:26.043951   18088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:05:26.121080   18088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:05:26.197246   18088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:05:26.208144   18088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:05:26.220436   18088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:05:26.220491   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.229603   18088 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 06:05:26.229651   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.237354   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.244903   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.252275   18088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:05:26.259259   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.267011   18088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:26.278815   18088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
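Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl named in the log. A quick way to confirm the result (a sketch, assuming the same file path inside the node container):

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # expected, per the commands above:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "systemd"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls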
	I1205 06:05:26.286437   18088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:05:26.292861   18088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 06:05:26.292900   18088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 06:05:26.303305   18088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:05:26.310441   18088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:05:26.384005   18088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:05:26.505267   18088 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:05:26.505394   18088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:05:26.508997   18088 start.go:564] Will wait 60s for crictl version
	I1205 06:05:26.509056   18088 ssh_runner.go:195] Run: which crictl
	I1205 06:05:26.512166   18088 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:05:26.535096   18088 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
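The crictl calls above resolve the runtime through /etc/crictl.yaml, which the tee at 06:05:26.208 filled with a single runtime-endpoint line. The same check can be made without that file by passing the endpoint explicitly, e.g. (a sketch, run inside the node container):

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version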
	I1205 06:05:26.535183   18088 ssh_runner.go:195] Run: crio --version
	I1205 06:05:26.560220   18088 ssh_runner.go:195] Run: crio --version
	I1205 06:05:26.586801   18088 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 06:05:26.587892   18088 cli_runner.go:164] Run: docker network inspect addons-177895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:05:26.603628   18088 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:05:26.607272   18088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
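The bash one-liner above rewrites /etc/hosts via a temp file plus a final sudo cp, because a plain '>' redirect would run as the unprivileged SSH user rather than root; it drops any stale host.minikube.internal line and appends the gateway mapping. Once it has run, the entry can be checked with something like:

  grep 'host.minikube.internal' /etc/hosts
  # expected: 192.168.49.1	host.minikube.internal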
	I1205 06:05:26.616591   18088 kubeadm.go:884] updating cluster {Name:addons-177895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-177895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:05:26.616700   18088 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:05:26.616750   18088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:05:26.644770   18088 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:05:26.644786   18088 crio.go:433] Images already preloaded, skipping extraction
	I1205 06:05:26.644823   18088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:05:26.667237   18088 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:05:26.667256   18088 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:05:26.667266   18088 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1205 06:05:26.667407   18088 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-177895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-177895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
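The [Unit]/[Service] fragment above is the kubelet drop-in that is scp'd a moment later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes). After the daemon-reload at 06:05:26.771, the effective unit, including this ExecStart override, can be inspected with standard systemd tooling:

  systemctl cat kubelet
  systemctl show -p ExecStart kubelet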
	I1205 06:05:26.667499   18088 ssh_runner.go:195] Run: crio config
	I1205 06:05:26.709743   18088 cni.go:84] Creating CNI manager for ""
	I1205 06:05:26.709764   18088 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:05:26.709785   18088 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:05:26.709813   18088 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-177895 NodeName:addons-177895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:05:26.709940   18088 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-177895"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:05:26.710003   18088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 06:05:26.717312   18088 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:05:26.717375   18088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:05:26.724240   18088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1205 06:05:26.735707   18088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 06:05:26.749189   18088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
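The 2209-byte kubeadm.yaml.new written above is the Init/Cluster/Kubelet/KubeProxy configuration dumped at kubeadm.go:196 earlier. It can be sanity-checked before init with kubeadm's own validator, using the same binaries directory the init command uses (a sketch, assuming kubeadm v1.34.x):

  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new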
	I1205 06:05:26.760170   18088 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:05:26.763242   18088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:05:26.771930   18088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:05:26.846421   18088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:05:26.868364   18088 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895 for IP: 192.168.49.2
	I1205 06:05:26.868381   18088 certs.go:195] generating shared ca certs ...
	I1205 06:05:26.868395   18088 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:26.868504   18088 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 06:05:26.967363   18088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt ...
	I1205 06:05:26.967386   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt: {Name:mk9820ca0baeabc29c6b7a204a5424632bc7dee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:26.967531   18088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key ...
	I1205 06:05:26.967543   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key: {Name:mkf2cb335d8447035dc4c895cc3dcd92e8d7756b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:26.967619   18088 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 06:05:27.028983   18088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt ...
	I1205 06:05:27.029003   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt: {Name:mka3d3c95e6815223c59da77efa96499ba48ea47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.029122   18088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key ...
	I1205 06:05:27.029133   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key: {Name:mk1e4251ab00860b186400fc98ca84639f085626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.029198   18088 certs.go:257] generating profile certs ...
	I1205 06:05:27.029256   18088 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.key
	I1205 06:05:27.029270   18088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt with IP's: []
	I1205 06:05:27.084580   18088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt ...
	I1205 06:05:27.084601   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: {Name:mk4b7ba93006a6a9f124b381c8f215dd5347c42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.084724   18088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.key ...
	I1205 06:05:27.084735   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.key: {Name:mk4b756913f387647876204f5b100305a846d3d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.084891   18088 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key.3b67d508
	I1205 06:05:27.084922   18088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt.3b67d508 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 06:05:27.183428   18088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt.3b67d508 ...
	I1205 06:05:27.183449   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt.3b67d508: {Name:mkbdaa894d6ad23fff1845806df83a0d503059db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.183578   18088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key.3b67d508 ...
	I1205 06:05:27.183590   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key.3b67d508: {Name:mk09cc5dfb0995bfecb789c6278dbdd8b84a5e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.183663   18088 certs.go:382] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt.3b67d508 -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt
	I1205 06:05:27.183730   18088 certs.go:386] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key.3b67d508 -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key
	I1205 06:05:27.183779   18088 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.key
	I1205 06:05:27.183797   18088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.crt with IP's: []
	I1205 06:05:27.227119   18088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.crt ...
	I1205 06:05:27.227139   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.crt: {Name:mk215009d063d438246d7d87ead78d88c93adaf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.227249   18088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.key ...
	I1205 06:05:27.227259   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.key: {Name:mk2d1e44ecb1c261d3bf6a4d1a94d448b238247b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:27.227425   18088 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:05:27.227458   18088 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:05:27.227489   18088 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:05:27.227512   18088 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 06:05:27.228081   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:05:27.245136   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:05:27.260924   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:05:27.276455   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:05:27.291743   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 06:05:27.306969   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:05:27.322268   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:05:27.337589   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:05:27.352787   18088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:05:27.369858   18088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:05:27.380919   18088 ssh_runner.go:195] Run: openssl version
	I1205 06:05:27.386432   18088 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:27.392734   18088 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:05:27.401220   18088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:27.404504   18088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:27.404538   18088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:27.436912   18088 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:05:27.443512   18088 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
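The b5213941.0 link created above follows the OpenSSL c_rehash convention: the filename stem is the subject hash printed by the 'openssl x509 -hash -noout' call two steps earlier, plus a .0 suffix, which is how TLS libraries look up CAs in /etc/ssl/certs. Whether the link resolves to the minikube CA can be checked with:

  openssl x509 -noout -subject -in /etc/ssl/certs/b5213941.0
  # should print the minikubeCA subject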
	I1205 06:05:27.450121   18088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:05:27.453122   18088 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 06:05:27.453162   18088 kubeadm.go:401] StartCluster: {Name:addons-177895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-177895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:05:27.453241   18088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:05:27.453283   18088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:05:27.477967   18088 cri.go:89] found id: ""
	I1205 06:05:27.478014   18088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:05:27.485110   18088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:05:27.492049   18088 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:05:27.492087   18088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:05:27.498795   18088 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:05:27.498811   18088 kubeadm.go:158] found existing configuration files:
	
	I1205 06:05:27.498840   18088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 06:05:27.505489   18088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:05:27.505523   18088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:05:27.512026   18088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 06:05:27.518616   18088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:05:27.518651   18088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:05:27.525128   18088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 06:05:27.531864   18088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:05:27.531905   18088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:05:27.538182   18088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 06:05:27.544796   18088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:05:27.544845   18088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:05:27.551225   18088 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:05:27.616853   18088 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1205 06:05:27.670668   18088 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:05:36.352110   18088 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 06:05:36.352184   18088 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:05:36.352278   18088 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:05:36.352398   18088 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1205 06:05:36.352470   18088 kubeadm.go:319] OS: Linux
	I1205 06:05:36.352533   18088 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:05:36.352614   18088 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:05:36.352683   18088 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:05:36.352759   18088 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:05:36.352835   18088 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:05:36.352886   18088 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:05:36.352955   18088 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:05:36.353033   18088 kubeadm.go:319] CGROUPS_IO: enabled
	I1205 06:05:36.353126   18088 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:05:36.353215   18088 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:05:36.353307   18088 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:05:36.353388   18088 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:05:36.354936   18088 out.go:252]   - Generating certificates and keys ...
	I1205 06:05:36.355003   18088 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:05:36.355055   18088 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:05:36.355120   18088 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 06:05:36.355170   18088 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 06:05:36.355218   18088 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 06:05:36.355278   18088 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 06:05:36.355367   18088 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 06:05:36.355551   18088 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-177895 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:05:36.355600   18088 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 06:05:36.355794   18088 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-177895 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:05:36.355905   18088 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 06:05:36.355984   18088 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 06:05:36.356057   18088 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 06:05:36.356134   18088 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:05:36.356180   18088 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:05:36.356237   18088 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:05:36.356295   18088 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:05:36.356394   18088 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:05:36.356445   18088 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:05:36.356544   18088 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:05:36.356647   18088 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:05:36.357876   18088 out.go:252]   - Booting up control plane ...
	I1205 06:05:36.357981   18088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:05:36.358070   18088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:05:36.358159   18088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:05:36.358285   18088 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:05:36.358419   18088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:05:36.358596   18088 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:05:36.358718   18088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:05:36.358773   18088 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:05:36.358939   18088 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:05:36.359101   18088 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:05:36.359198   18088 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.783887ms
	I1205 06:05:36.359316   18088 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 06:05:36.359431   18088 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1205 06:05:36.359553   18088 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 06:05:36.359674   18088 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 06:05:36.359797   18088 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.408286193s
	I1205 06:05:36.359888   18088 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.975191548s
	I1205 06:05:36.359970   18088 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.50221364s
	I1205 06:05:36.360108   18088 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 06:05:36.360276   18088 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 06:05:36.360372   18088 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 06:05:36.360645   18088 kubeadm.go:319] [mark-control-plane] Marking the node addons-177895 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 06:05:36.360722   18088 kubeadm.go:319] [bootstrap-token] Using token: 77ksux.rxi4lc4qkr43phxu
	I1205 06:05:36.362707   18088 out.go:252]   - Configuring RBAC rules ...
	I1205 06:05:36.362837   18088 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 06:05:36.362947   18088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 06:05:36.363140   18088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 06:05:36.363293   18088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 06:05:36.363449   18088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 06:05:36.363528   18088 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 06:05:36.363681   18088 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 06:05:36.363747   18088 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 06:05:36.363798   18088 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 06:05:36.363805   18088 kubeadm.go:319] 
	I1205 06:05:36.363865   18088 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 06:05:36.363877   18088 kubeadm.go:319] 
	I1205 06:05:36.363935   18088 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 06:05:36.363941   18088 kubeadm.go:319] 
	I1205 06:05:36.363961   18088 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 06:05:36.364010   18088 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 06:05:36.364058   18088 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 06:05:36.364063   18088 kubeadm.go:319] 
	I1205 06:05:36.364107   18088 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 06:05:36.364114   18088 kubeadm.go:319] 
	I1205 06:05:36.364153   18088 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 06:05:36.364159   18088 kubeadm.go:319] 
	I1205 06:05:36.364202   18088 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 06:05:36.364266   18088 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 06:05:36.364349   18088 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 06:05:36.364355   18088 kubeadm.go:319] 
	I1205 06:05:36.364428   18088 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 06:05:36.364494   18088 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 06:05:36.364501   18088 kubeadm.go:319] 
	I1205 06:05:36.364570   18088 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 77ksux.rxi4lc4qkr43phxu \
	I1205 06:05:36.364654   18088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f21ef1fe4655ade9215ff0d25196a0f1ad174afc7024ad048086e40bbc0de65d \
	I1205 06:05:36.364678   18088 kubeadm.go:319] 	--control-plane 
	I1205 06:05:36.364687   18088 kubeadm.go:319] 
	I1205 06:05:36.364757   18088 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 06:05:36.364764   18088 kubeadm.go:319] 
	I1205 06:05:36.364844   18088 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 77ksux.rxi4lc4qkr43phxu \
	I1205 06:05:36.364952   18088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f21ef1fe4655ade9215ff0d25196a0f1ad174afc7024ad048086e40bbc0de65d 
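The join commands above embed the bootstrap token 77ksux.rxi4lc4qkr43phxu, which the InitConfiguration earlier gives a 24h0m0s TTL. Once it expires, an equivalent command with a fresh token can be printed on the control plane with standard kubeadm:

  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm token create --print-join-command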
	I1205 06:05:36.364964   18088 cni.go:84] Creating CNI manager for ""
	I1205 06:05:36.364969   18088 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:05:36.366336   18088 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1205 06:05:36.367409   18088 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 06:05:36.371405   18088 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1205 06:05:36.371420   18088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 06:05:36.383261   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
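The 2601-byte /var/tmp/minikube/cni.yaml applied above is the kindnet manifest recommended at cni.go:143. Assuming the usual kindnet DaemonSet name in kube-system, its rollout can be followed with the same kubectl binary:

  sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system rollout status daemonset kindnet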
	I1205 06:05:36.571084   18088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 06:05:36.571158   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:36.571154   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-177895 minikube.k8s.io/updated_at=2025_12_05T06_05_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=addons-177895 minikube.k8s.io/primary=true
	I1205 06:05:36.636652   18088 ops.go:34] apiserver oom_adj: -16
	I1205 06:05:36.636757   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:37.137503   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:37.637870   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:38.136964   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:38.637611   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:39.136895   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:39.636940   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:40.137490   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:40.637565   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:41.137823   18088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:41.198752   18088 kubeadm.go:1114] duration metric: took 4.627657727s to wait for elevateKubeSystemPrivileges
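The repeated 'kubectl get sa default' calls above are the elevateKubeSystemPrivileges wait loop: minikube polls until the default ServiceAccount exists, a sign that the controller-manager's service-account controller is running, before declaring the cluster started. A one-off equivalent (assuming the default namespace) is:

  sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get serviceaccount default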
	I1205 06:05:41.198786   18088 kubeadm.go:403] duration metric: took 13.745626755s to StartCluster
	I1205 06:05:41.198809   18088 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:41.198922   18088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:05:41.199284   18088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:41.199507   18088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 06:05:41.199530   18088 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:05:41.199596   18088 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 06:05:41.199701   18088 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:05:41.199727   18088 addons.go:70] Setting yakd=true in profile "addons-177895"
	I1205 06:05:41.199737   18088 addons.go:70] Setting inspektor-gadget=true in profile "addons-177895"
	I1205 06:05:41.199753   18088 addons.go:239] Setting addon inspektor-gadget=true in "addons-177895"
	I1205 06:05:41.199761   18088 addons.go:70] Setting volcano=true in profile "addons-177895"
	I1205 06:05:41.199766   18088 addons.go:70] Setting registry-creds=true in profile "addons-177895"
	I1205 06:05:41.199776   18088 addons.go:70] Setting volumesnapshots=true in profile "addons-177895"
	I1205 06:05:41.199782   18088 addons.go:239] Setting addon registry-creds=true in "addons-177895"
	I1205 06:05:41.199784   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199787   18088 addons.go:239] Setting addon volumesnapshots=true in "addons-177895"
	I1205 06:05:41.199776   18088 addons.go:70] Setting default-storageclass=true in profile "addons-177895"
	I1205 06:05:41.199806   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199819   18088 addons.go:70] Setting metrics-server=true in profile "addons-177895"
	I1205 06:05:41.199814   18088 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-177895"
	I1205 06:05:41.199839   18088 addons.go:70] Setting cloud-spanner=true in profile "addons-177895"
	I1205 06:05:41.199826   18088 addons.go:70] Setting ingress=true in profile "addons-177895"
	I1205 06:05:41.199856   18088 addons.go:70] Setting registry=true in profile "addons-177895"
	I1205 06:05:41.199856   18088 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-177895"
	I1205 06:05:41.199862   18088 addons.go:70] Setting gcp-auth=true in profile "addons-177895"
	I1205 06:05:41.199868   18088 addons.go:239] Setting addon registry=true in "addons-177895"
	I1205 06:05:41.199879   18088 addons.go:239] Setting addon ingress=true in "addons-177895"
	I1205 06:05:41.199883   18088 mustload.go:66] Loading cluster: addons-177895
	I1205 06:05:41.199887   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199938   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199956   18088 addons.go:70] Setting storage-provisioner=true in profile "addons-177895"
	I1205 06:05:41.199975   18088 addons.go:239] Setting addon storage-provisioner=true in "addons-177895"
	I1205 06:05:41.200001   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.200126   18088 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:05:41.200276   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.200303   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.200316   18088 addons.go:70] Setting ingress-dns=true in profile "addons-177895"
	I1205 06:05:41.200345   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.200346   18088 addons.go:239] Setting addon ingress-dns=true in "addons-177895"
	I1205 06:05:41.200382   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.200418   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.200463   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.200474   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.199753   18088 addons.go:239] Setting addon yakd=true in "addons-177895"
	I1205 06:05:41.200742   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.201189   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.199770   18088 addons.go:239] Setting addon volcano=true in "addons-177895"
	I1205 06:05:41.201308   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199848   18088 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-177895"
	I1205 06:05:41.201384   18088 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-177895"
	I1205 06:05:41.201437   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199827   18088 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-177895"
	I1205 06:05:41.199855   18088 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-177895"
	I1205 06:05:41.201505   18088 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-177895"
	I1205 06:05:41.201555   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.199837   18088 addons.go:239] Setting addon metrics-server=true in "addons-177895"
	I1205 06:05:41.201601   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.200303   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.201850   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.202054   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.202083   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.202127   18088 out.go:179] * Verifying Kubernetes components...
	I1205 06:05:41.199816   18088 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-177895"
	I1205 06:05:41.199851   18088 addons.go:239] Setting addon cloud-spanner=true in "addons-177895"
	I1205 06:05:41.202918   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.204054   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.199808   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.206940   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.207203   18088 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-177895"
	I1205 06:05:41.207238   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.207417   18088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:05:41.207801   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.209807   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.210709   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.211500   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.265310   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.268534   18088 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1205 06:05:41.268727   18088 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1205 06:05:41.268752   18088 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1205 06:05:41.270065   18088 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1205 06:05:41.270412   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1205 06:05:41.270465   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.272446   18088 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-177895"
	I1205 06:05:41.272546   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.273023   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.273098   18088 out.go:179]   - Using image docker.io/registry:3.0.0
	I1205 06:05:41.273285   18088 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 06:05:41.274911   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 06:05:41.274984   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.275803   18088 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 06:05:41.275831   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 06:05:41.275875   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.281545   18088 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:05:41.281608   18088 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 06:05:41.283135   18088 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 06:05:41.283154   18088 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 06:05:41.283206   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.283451   18088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:05:41.283488   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:05:41.283542   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.295803   18088 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1205 06:05:41.295803   18088 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1205 06:05:41.295905   18088 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1205 06:05:41.297060   18088 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 06:05:41.297080   18088 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 06:05:41.297155   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.297787   18088 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 06:05:41.297800   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1205 06:05:41.297844   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.298087   18088 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 06:05:41.298100   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1205 06:05:41.298140   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.296278   18088 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 06:05:41.304109   18088 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 06:05:41.304126   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 06:05:41.304137   18088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1205 06:05:41.304173   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.306482   18088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:05:41.307636   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 06:05:41.307731   18088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	W1205 06:05:41.311793   18088 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 06:05:41.312474   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 06:05:41.312491   18088 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 06:05:41.312552   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.312888   18088 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 06:05:41.312903   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 06:05:41.312946   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.321225   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 06:05:41.324790   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 06:05:41.326862   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 06:05:41.328055   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 06:05:41.329315   18088 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1205 06:05:41.330423   18088 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1205 06:05:41.330437   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 06:05:41.330498   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.331264   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 06:05:41.332549   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 06:05:41.333835   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 06:05:41.335203   18088 addons.go:239] Setting addon default-storageclass=true in "addons-177895"
	I1205 06:05:41.338899   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:41.339395   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:41.340281   18088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 06:05:41.341345   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 06:05:41.341405   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 06:05:41.342120   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.345308   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.348804   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.348930   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.354562   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.355759   18088 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 06:05:41.356878   18088 out.go:179]   - Using image docker.io/busybox:stable
	I1205 06:05:41.357801   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.358020   18088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 06:05:41.359340   18088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 06:05:41.361186   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 06:05:41.361273   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.361343   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.370433   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.383595   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.386697   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.393374   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.395304   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.402104   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.404792   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	W1205 06:05:41.406534   18088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:05:41.406563   18088 retry.go:31] will retry after 345.435491ms: ssh: handshake failed: EOF
	I1205 06:05:41.415651   18088 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:05:41.415675   18088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:05:41.415735   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:41.426078   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:41.426605   18088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1205 06:05:41.426875   18088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:05:41.426899   18088 retry.go:31] will retry after 208.716502ms: ssh: handshake failed: EOF
	I1205 06:05:41.448505   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	W1205 06:05:41.449539   18088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:05:41.449561   18088 retry.go:31] will retry after 356.939619ms: ssh: handshake failed: EOF
	I1205 06:05:41.532949   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 06:05:41.535565   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1205 06:05:41.537623   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 06:05:41.554153   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 06:05:41.570535   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:05:41.576427   18088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 06:05:41.576453   18088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 06:05:41.578156   18088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 06:05:41.578193   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 06:05:41.578258   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 06:05:41.578278   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 06:05:41.582874   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 06:05:41.586579   18088 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 06:05:41.586598   18088 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 06:05:41.587519   18088 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 06:05:41.587550   18088 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 06:05:41.607733   18088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 06:05:41.607762   18088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 06:05:41.614774   18088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 06:05:41.614800   18088 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 06:05:41.633789   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 06:05:41.633817   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 06:05:41.634155   18088 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 06:05:41.634177   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 06:05:41.643620   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 06:05:41.654815   18088 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 06:05:41.654847   18088 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 06:05:41.659442   18088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 06:05:41.659479   18088 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 06:05:41.660489   18088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 06:05:41.660510   18088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 06:05:41.687458   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 06:05:41.687485   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 06:05:41.689180   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 06:05:41.702281   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 06:05:41.702329   18088 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 06:05:41.702995   18088 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 06:05:41.703012   18088 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 06:05:41.728808   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 06:05:41.728862   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 06:05:41.743191   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 06:05:41.750830   18088 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 06:05:41.750857   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 06:05:41.761553   18088 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:05:41.761588   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 06:05:41.767450   18088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 06:05:41.767473   18088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 06:05:41.775765   18088 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1205 06:05:41.778655   18088 node_ready.go:35] waiting up to 6m0s for node "addons-177895" to be "Ready" ...
	I1205 06:05:41.806109   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 06:05:41.811921   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:05:41.829341   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 06:05:41.829365   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 06:05:41.860872   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 06:05:41.886479   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 06:05:41.886503   18088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 06:05:41.929309   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 06:05:41.929345   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 06:05:41.964232   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 06:05:41.964256   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 06:05:42.020916   18088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 06:05:42.021008   18088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 06:05:42.026954   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:05:42.029563   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 06:05:42.060630   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 06:05:42.285880   18088 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-177895" context rescaled to 1 replicas
	I1205 06:05:42.716541   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.183551814s)
	I1205 06:05:42.716587   18088 addons.go:495] Verifying addon ingress=true in "addons-177895"
	I1205 06:05:42.716618   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.181004601s)
	I1205 06:05:42.716725   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.162548412s)
	I1205 06:05:42.716800   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.146211823s)
	I1205 06:05:42.716830   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.133936s)
	I1205 06:05:42.716667   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.179026456s)
	I1205 06:05:42.716920   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.073272676s)
	I1205 06:05:42.716973   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.027763843s)
	I1205 06:05:42.716988   18088 addons.go:495] Verifying addon registry=true in "addons-177895"
	I1205 06:05:42.717097   18088 addons.go:495] Verifying addon metrics-server=true in "addons-177895"
	I1205 06:05:42.718077   18088 out.go:179] * Verifying registry addon...
	I1205 06:05:42.718089   18088 out.go:179] * Verifying ingress addon...
	I1205 06:05:42.720564   18088 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-177895 service yakd-dashboard -n yakd-dashboard
	
	I1205 06:05:42.722058   18088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 06:05:42.722059   18088 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 06:05:42.724875   18088 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 06:05:42.724942   18088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 06:05:42.724957   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:43.181137   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.369170367s)
	W1205 06:05:43.181195   18088 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 06:05:43.181221   18088 retry.go:31] will retry after 181.285683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 06:05:43.181288   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.320341455s)
	I1205 06:05:43.181515   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.154543642s)
	I1205 06:05:43.181629   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.152041061s)
	I1205 06:05:43.181828   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.121160666s)
	I1205 06:05:43.181846   18088 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-177895"
	I1205 06:05:43.183267   18088 out.go:179] * Verifying csi-hostpath-driver addon...
	I1205 06:05:43.185794   18088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 06:05:43.188377   18088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 06:05:43.188395   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1205 06:05:43.188398   18088 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
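	The 'default-storageclass' warning above is an optimistic-concurrency conflict: marking csi-hostpath-sc as non-default failed because the object had been modified concurrently while other addons were still touching storage classes. As a minimal sketch (not part of this run, and assuming minikube's usual "standard" class and the stock default-class annotation), the same re-marking could be repeated by hand once the addons settle:
	
	  # demote the CSI hostpath class, then promote the built-in one
	  kubectl patch storageclass csi-hostpath-sc \
	    -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	  kubectl patch storageclass standard \
	    -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	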
	I1205 06:05:43.288680   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:43.288799   18088 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 06:05:43.288817   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:43.362924   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:05:43.688581   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:43.724448   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:43.724756   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:43.781372   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:44.189444   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:44.224243   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:44.224421   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:44.688228   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:44.724134   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:44.724370   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:45.189134   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:45.225406   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:45.225459   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:45.688357   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:45.724746   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:45.724810   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:45.785970   18088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.423011549s)
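	The earlier apply failure ("no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first") is a CRD establishment race: the VolumeSnapshotClass object was submitted in the same kubectl apply as the snapshot.storage.k8s.io CRDs, so the retry with --force above is what eventually succeeded. As a minimal sketch (not taken from this run; the CRD name is the standard external-snapshotter one), the race can also be avoided by waiting for the CRD to be established before applying objects of that kind:
	
	  # apply the CRDs first
	  kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	  # block until the API server has registered the new kind
	  kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  # only then apply objects that use it
	  kubectl apply -f csi-hostpath-snapshotclass.yaml
	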
	I1205 06:05:46.188837   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:46.224836   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:46.225056   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:46.280404   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:46.687984   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:46.724781   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:46.724852   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:47.188631   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:47.224629   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:47.224720   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:47.689075   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:47.726198   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:47.726426   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:48.188589   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:48.224548   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:48.224550   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:48.280776   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:48.688180   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:48.723766   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:48.724032   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:48.878250   18088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 06:05:48.878304   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:48.896081   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:49.007062   18088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 06:05:49.018567   18088 addons.go:239] Setting addon gcp-auth=true in "addons-177895"
	I1205 06:05:49.018638   18088 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:05:49.018965   18088 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:05:49.035660   18088 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 06:05:49.035708   18088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:05:49.052289   18088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:05:49.146036   18088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:05:49.147039   18088 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 06:05:49.148019   18088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 06:05:49.148033   18088 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 06:05:49.159647   18088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 06:05:49.159665   18088 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 06:05:49.171016   18088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 06:05:49.171036   18088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 06:05:49.182341   18088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 06:05:49.188757   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:49.224963   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:49.225133   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:49.458781   18088 addons.go:495] Verifying addon gcp-auth=true in "addons-177895"
	I1205 06:05:49.460141   18088 out.go:179] * Verifying gcp-auth addon...
	I1205 06:05:49.462277   18088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 06:05:49.464121   18088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 06:05:49.464138   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:49.688926   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:49.724673   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:49.724929   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:49.965123   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:50.188585   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:50.224599   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:50.224745   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:50.281258   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:50.464700   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:50.689373   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:50.724164   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:50.724374   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:50.965002   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:51.188622   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:51.224542   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:51.224707   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:51.465141   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:51.688496   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:51.724290   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:51.724451   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:51.965051   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:52.188550   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:52.224598   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:52.224737   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:52.281549   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:52.464718   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:52.689241   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:52.723881   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:52.724138   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:52.965023   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:53.188269   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:53.224130   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:53.224272   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:53.465225   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:53.688505   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:53.724765   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:53.724882   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:53.964423   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:54.188922   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:54.224714   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:54.224952   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:54.464533   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:54.688983   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:54.725060   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:54.725096   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:54.780749   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:54.965605   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:55.189150   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:55.225364   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:55.225446   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:55.464448   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:55.688882   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:55.725599   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:55.725804   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:55.965366   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:56.188855   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:56.224877   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:56.224967   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:56.464847   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:56.688421   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:56.724349   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:56.724404   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:56.781118   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:56.965154   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:57.188672   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:57.224619   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:57.224772   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:57.464663   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:57.689073   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:57.725142   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:57.725184   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:57.965066   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:58.188463   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:58.224349   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:58.224589   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:58.465455   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:58.689236   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:58.723999   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:58.724102   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:58.964934   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:59.188163   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:59.225154   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:59.225313   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:05:59.281221   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:05:59.464466   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:05:59.688713   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:05:59.724666   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:05:59.724683   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:05:59.964307   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:00.188991   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:00.224882   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:00.225159   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:00.464933   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:00.688434   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:00.724233   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:00.724416   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:00.965240   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:01.189057   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:01.225191   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:01.225264   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:01.281544   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:01.465044   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:01.688564   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:01.724649   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:01.724730   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:01.965206   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:02.188618   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:02.224733   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:02.224923   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:02.465004   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:02.688887   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:02.724808   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:02.724955   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:02.964612   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:03.189021   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:03.225128   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:03.225214   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:03.465295   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:03.688912   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:03.724965   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:03.725024   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:03.781024   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:03.965065   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:04.188705   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:04.224474   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:04.224632   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:04.464376   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:04.688911   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:04.724818   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:04.724930   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:04.964975   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:05.188165   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:05.223959   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:05.224220   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:05.465285   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:05.688680   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:05.724872   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:05.724876   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1205 06:06:05.781466   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:05.964705   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:06.188022   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:06.224859   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:06.224998   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:06.465515   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:06.689059   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:06.725032   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:06.725155   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:06.965046   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:07.188520   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:07.224518   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:07.224665   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:07.464956   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:07.688248   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:07.723982   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:07.724133   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:07.965169   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:08.188262   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:08.224177   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:08.224229   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:08.280877   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:08.465437   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:08.688688   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:08.724743   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:08.724847   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:08.964647   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:09.189023   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:09.225071   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:09.225195   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:09.465185   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:09.688593   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:09.724588   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:09.724613   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:09.964589   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:10.188776   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:10.224707   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:10.224832   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:10.464410   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:10.688730   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:10.724641   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:10.724848   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:10.780501   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:10.964704   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:11.188830   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:11.224955   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:11.225139   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:11.465316   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:11.688673   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:11.724799   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:11.724954   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:11.964852   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:12.188264   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:12.224008   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:12.224244   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:12.465225   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:12.688724   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:12.724741   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:12.724951   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:12.780672   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:12.964953   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:13.188227   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:13.224055   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:13.224156   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:13.464959   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:13.688209   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:13.724259   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:13.724263   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:13.965204   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:14.188530   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:14.224533   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:14.224663   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:14.464737   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:14.688007   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:14.724916   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:14.725077   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:14.965018   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:15.188449   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:15.224432   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:15.224610   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:15.281355   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:15.464589   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:15.688956   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:15.724781   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:15.724982   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:15.964777   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:16.188023   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:16.225136   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:16.225167   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:16.465693   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:16.688991   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:16.724896   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:16.724927   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:16.964843   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:17.188132   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:17.224071   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:17.224235   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:17.464661   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:17.689056   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:17.725134   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:17.725160   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:17.781130   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:17.964490   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:18.188680   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:18.224437   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:18.224606   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:18.464501   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:18.689003   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:18.728724   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:18.728787   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:18.965102   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:19.188298   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:19.224196   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:19.224395   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:19.465440   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:19.688928   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:19.724796   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:19.724900   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:19.964900   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:20.188144   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:20.225062   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:20.225100   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:20.280919   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:20.465399   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:20.688834   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:20.724874   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:20.725061   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:20.965076   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:21.188546   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:21.224691   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:21.224738   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:21.465313   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:21.688644   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:21.724788   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:21.724998   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:21.964934   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:22.188220   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:22.224071   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:22.224267   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:06:22.281035   18088 node_ready.go:57] node "addons-177895" has "Ready":"False" status (will retry)
	I1205 06:06:22.465543   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:22.690483   18088 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 06:06:22.690512   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:22.725450   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:22.726650   18088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 06:06:22.726673   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:22.780601   18088 node_ready.go:49] node "addons-177895" is "Ready"
	I1205 06:06:22.780624   18088 node_ready.go:38] duration metric: took 41.00193939s for node "addons-177895" to be "Ready" ...
	I1205 06:06:22.780636   18088 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:06:22.780675   18088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:06:22.798069   18088 api_server.go:72] duration metric: took 41.598504915s to wait for apiserver process to appear ...
	I1205 06:06:22.798096   18088 api_server.go:88] waiting for apiserver healthz status ...
	I1205 06:06:22.798123   18088 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 06:06:22.804098   18088 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 06:06:22.804848   18088 api_server.go:141] control plane version: v1.34.2
	I1205 06:06:22.804869   18088 api_server.go:131] duration metric: took 6.764721ms to wait for apiserver health ...
	I1205 06:06:22.804876   18088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 06:06:22.808144   18088 system_pods.go:59] 20 kube-system pods found
	I1205 06:06:22.808173   18088 system_pods.go:61] "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:06:22.808180   18088 system_pods.go:61] "coredns-66bc5c9577-xlfl4" [fca7fb2d-3a9c-4281-8f88-7427ed346cbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:06:22.808188   18088 system_pods.go:61] "csi-hostpath-attacher-0" [a83298ac-7851-4a3b-927e-367a9d031cdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:06:22.808194   18088 system_pods.go:61] "csi-hostpath-resizer-0" [18e71be9-8902-4c50-94e3-01ad80da8abc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:06:22.808203   18088 system_pods.go:61] "csi-hostpathplugin-gm8fx" [e588dfd7-6485-4158-b44f-7e5e5b742036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 06:06:22.808208   18088 system_pods.go:61] "etcd-addons-177895" [252d20a0-beef-497d-98ca-a861b06516c6] Running
	I1205 06:06:22.808215   18088 system_pods.go:61] "kindnet-n79ts" [b626c676-0b57-479a-8b6d-784cf0ffaa23] Running
	I1205 06:06:22.808218   18088 system_pods.go:61] "kube-apiserver-addons-177895" [fe9497b8-5686-412c-ada1-5922bed2e5e8] Running
	I1205 06:06:22.808224   18088 system_pods.go:61] "kube-controller-manager-addons-177895" [72fc0c5c-3be3-4fad-bdf3-4fca1da839dc] Running
	I1205 06:06:22.808229   18088 system_pods.go:61] "kube-ingress-dns-minikube" [fede7f44-4af6-4d0a-a25b-764dd3bae9b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:06:22.808236   18088 system_pods.go:61] "kube-proxy-gk8dq" [403c7d4a-8858-408b-88a3-3b59056a6db8] Running
	I1205 06:06:22.808239   18088 system_pods.go:61] "kube-scheduler-addons-177895" [827c6197-9bb4-488e-99c6-0ffd004a8d3e] Running
	I1205 06:06:22.808244   18088 system_pods.go:61] "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:06:22.808249   18088 system_pods.go:61] "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:06:22.808257   18088 system_pods.go:61] "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:06:22.808262   18088 system_pods.go:61] "registry-creds-764b6fb674-8p8pq" [8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:06:22.808269   18088 system_pods.go:61] "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:06:22.808274   18088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d5g82" [8b9afade-56f5-4719-af5a-be801e40a504] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:22.808282   18088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-h9khj" [5e8b27bf-14d3-4269-ab6f-7e236482cb3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:22.808287   18088 system_pods.go:61] "storage-provisioner" [866f597a-b240-4a0b-8f9c-d1604ca66331] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:06:22.808294   18088 system_pods.go:74] duration metric: took 3.41374ms to wait for pod list to return data ...
	I1205 06:06:22.808301   18088 default_sa.go:34] waiting for default service account to be created ...
	I1205 06:06:22.809932   18088 default_sa.go:45] found service account: "default"
	I1205 06:06:22.809947   18088 default_sa.go:55] duration metric: took 1.639388ms for default service account to be created ...
	I1205 06:06:22.809954   18088 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 06:06:22.812757   18088 system_pods.go:86] 20 kube-system pods found
	I1205 06:06:22.812786   18088 system_pods.go:89] "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:06:22.812796   18088 system_pods.go:89] "coredns-66bc5c9577-xlfl4" [fca7fb2d-3a9c-4281-8f88-7427ed346cbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:06:22.812809   18088 system_pods.go:89] "csi-hostpath-attacher-0" [a83298ac-7851-4a3b-927e-367a9d031cdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:06:22.812820   18088 system_pods.go:89] "csi-hostpath-resizer-0" [18e71be9-8902-4c50-94e3-01ad80da8abc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:06:22.812829   18088 system_pods.go:89] "csi-hostpathplugin-gm8fx" [e588dfd7-6485-4158-b44f-7e5e5b742036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 06:06:22.812838   18088 system_pods.go:89] "etcd-addons-177895" [252d20a0-beef-497d-98ca-a861b06516c6] Running
	I1205 06:06:22.812848   18088 system_pods.go:89] "kindnet-n79ts" [b626c676-0b57-479a-8b6d-784cf0ffaa23] Running
	I1205 06:06:22.812857   18088 system_pods.go:89] "kube-apiserver-addons-177895" [fe9497b8-5686-412c-ada1-5922bed2e5e8] Running
	I1205 06:06:22.812866   18088 system_pods.go:89] "kube-controller-manager-addons-177895" [72fc0c5c-3be3-4fad-bdf3-4fca1da839dc] Running
	I1205 06:06:22.812878   18088 system_pods.go:89] "kube-ingress-dns-minikube" [fede7f44-4af6-4d0a-a25b-764dd3bae9b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:06:22.812886   18088 system_pods.go:89] "kube-proxy-gk8dq" [403c7d4a-8858-408b-88a3-3b59056a6db8] Running
	I1205 06:06:22.812891   18088 system_pods.go:89] "kube-scheduler-addons-177895" [827c6197-9bb4-488e-99c6-0ffd004a8d3e] Running
	I1205 06:06:22.812903   18088 system_pods.go:89] "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:06:22.812914   18088 system_pods.go:89] "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:06:22.812931   18088 system_pods.go:89] "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:06:22.812943   18088 system_pods.go:89] "registry-creds-764b6fb674-8p8pq" [8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:06:22.812951   18088 system_pods.go:89] "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:06:22.812962   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d5g82" [8b9afade-56f5-4719-af5a-be801e40a504] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:22.812974   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h9khj" [5e8b27bf-14d3-4269-ab6f-7e236482cb3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:22.812982   18088 system_pods.go:89] "storage-provisioner" [866f597a-b240-4a0b-8f9c-d1604ca66331] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:06:22.813000   18088 retry.go:31] will retry after 244.609337ms: missing components: kube-dns
	I1205 06:06:22.967015   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:23.068213   18088 system_pods.go:86] 20 kube-system pods found
	I1205 06:06:23.068253   18088 system_pods.go:89] "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:06:23.068266   18088 system_pods.go:89] "coredns-66bc5c9577-xlfl4" [fca7fb2d-3a9c-4281-8f88-7427ed346cbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:06:23.068277   18088 system_pods.go:89] "csi-hostpath-attacher-0" [a83298ac-7851-4a3b-927e-367a9d031cdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:06:23.068285   18088 system_pods.go:89] "csi-hostpath-resizer-0" [18e71be9-8902-4c50-94e3-01ad80da8abc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:06:23.068293   18088 system_pods.go:89] "csi-hostpathplugin-gm8fx" [e588dfd7-6485-4158-b44f-7e5e5b742036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 06:06:23.068299   18088 system_pods.go:89] "etcd-addons-177895" [252d20a0-beef-497d-98ca-a861b06516c6] Running
	I1205 06:06:23.068312   18088 system_pods.go:89] "kindnet-n79ts" [b626c676-0b57-479a-8b6d-784cf0ffaa23] Running
	I1205 06:06:23.068336   18088 system_pods.go:89] "kube-apiserver-addons-177895" [fe9497b8-5686-412c-ada1-5922bed2e5e8] Running
	I1205 06:06:23.068347   18088 system_pods.go:89] "kube-controller-manager-addons-177895" [72fc0c5c-3be3-4fad-bdf3-4fca1da839dc] Running
	I1205 06:06:23.068356   18088 system_pods.go:89] "kube-ingress-dns-minikube" [fede7f44-4af6-4d0a-a25b-764dd3bae9b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:06:23.068361   18088 system_pods.go:89] "kube-proxy-gk8dq" [403c7d4a-8858-408b-88a3-3b59056a6db8] Running
	I1205 06:06:23.068368   18088 system_pods.go:89] "kube-scheduler-addons-177895" [827c6197-9bb4-488e-99c6-0ffd004a8d3e] Running
	I1205 06:06:23.068377   18088 system_pods.go:89] "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:06:23.068387   18088 system_pods.go:89] "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:06:23.068395   18088 system_pods.go:89] "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:06:23.068404   18088 system_pods.go:89] "registry-creds-764b6fb674-8p8pq" [8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:06:23.068412   18088 system_pods.go:89] "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:06:23.068425   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d5g82" [8b9afade-56f5-4719-af5a-be801e40a504] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.068434   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h9khj" [5e8b27bf-14d3-4269-ab6f-7e236482cb3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.068446   18088 system_pods.go:89] "storage-provisioner" [866f597a-b240-4a0b-8f9c-d1604ca66331] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:06:23.068467   18088 retry.go:31] will retry after 243.523752ms: missing components: kube-dns
	I1205 06:06:23.190510   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:23.225813   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:23.225854   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:23.327844   18088 system_pods.go:86] 20 kube-system pods found
	I1205 06:06:23.327882   18088 system_pods.go:89] "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:06:23.327893   18088 system_pods.go:89] "coredns-66bc5c9577-xlfl4" [fca7fb2d-3a9c-4281-8f88-7427ed346cbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:06:23.327904   18088 system_pods.go:89] "csi-hostpath-attacher-0" [a83298ac-7851-4a3b-927e-367a9d031cdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:06:23.327912   18088 system_pods.go:89] "csi-hostpath-resizer-0" [18e71be9-8902-4c50-94e3-01ad80da8abc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:06:23.327921   18088 system_pods.go:89] "csi-hostpathplugin-gm8fx" [e588dfd7-6485-4158-b44f-7e5e5b742036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 06:06:23.327930   18088 system_pods.go:89] "etcd-addons-177895" [252d20a0-beef-497d-98ca-a861b06516c6] Running
	I1205 06:06:23.327937   18088 system_pods.go:89] "kindnet-n79ts" [b626c676-0b57-479a-8b6d-784cf0ffaa23] Running
	I1205 06:06:23.327946   18088 system_pods.go:89] "kube-apiserver-addons-177895" [fe9497b8-5686-412c-ada1-5922bed2e5e8] Running
	I1205 06:06:23.327953   18088 system_pods.go:89] "kube-controller-manager-addons-177895" [72fc0c5c-3be3-4fad-bdf3-4fca1da839dc] Running
	I1205 06:06:23.327965   18088 system_pods.go:89] "kube-ingress-dns-minikube" [fede7f44-4af6-4d0a-a25b-764dd3bae9b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:06:23.327970   18088 system_pods.go:89] "kube-proxy-gk8dq" [403c7d4a-8858-408b-88a3-3b59056a6db8] Running
	I1205 06:06:23.327976   18088 system_pods.go:89] "kube-scheduler-addons-177895" [827c6197-9bb4-488e-99c6-0ffd004a8d3e] Running
	I1205 06:06:23.327987   18088 system_pods.go:89] "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:06:23.327999   18088 system_pods.go:89] "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:06:23.328010   18088 system_pods.go:89] "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:06:23.328018   18088 system_pods.go:89] "registry-creds-764b6fb674-8p8pq" [8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:06:23.328026   18088 system_pods.go:89] "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:06:23.328036   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d5g82" [8b9afade-56f5-4719-af5a-be801e40a504] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.328044   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h9khj" [5e8b27bf-14d3-4269-ab6f-7e236482cb3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.328064   18088 system_pods.go:89] "storage-provisioner" [866f597a-b240-4a0b-8f9c-d1604ca66331] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:06:23.328083   18088 retry.go:31] will retry after 405.070616ms: missing components: kube-dns
	I1205 06:06:23.465719   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:23.688989   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:23.726689   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:23.726891   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:23.737349   18088 system_pods.go:86] 20 kube-system pods found
	I1205 06:06:23.737373   18088 system_pods.go:89] "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:06:23.737379   18088 system_pods.go:89] "coredns-66bc5c9577-xlfl4" [fca7fb2d-3a9c-4281-8f88-7427ed346cbd] Running
	I1205 06:06:23.737386   18088 system_pods.go:89] "csi-hostpath-attacher-0" [a83298ac-7851-4a3b-927e-367a9d031cdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:06:23.737394   18088 system_pods.go:89] "csi-hostpath-resizer-0" [18e71be9-8902-4c50-94e3-01ad80da8abc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:06:23.737400   18088 system_pods.go:89] "csi-hostpathplugin-gm8fx" [e588dfd7-6485-4158-b44f-7e5e5b742036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 06:06:23.737407   18088 system_pods.go:89] "etcd-addons-177895" [252d20a0-beef-497d-98ca-a861b06516c6] Running
	I1205 06:06:23.737411   18088 system_pods.go:89] "kindnet-n79ts" [b626c676-0b57-479a-8b6d-784cf0ffaa23] Running
	I1205 06:06:23.737417   18088 system_pods.go:89] "kube-apiserver-addons-177895" [fe9497b8-5686-412c-ada1-5922bed2e5e8] Running
	I1205 06:06:23.737420   18088 system_pods.go:89] "kube-controller-manager-addons-177895" [72fc0c5c-3be3-4fad-bdf3-4fca1da839dc] Running
	I1205 06:06:23.737429   18088 system_pods.go:89] "kube-ingress-dns-minikube" [fede7f44-4af6-4d0a-a25b-764dd3bae9b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:06:23.737436   18088 system_pods.go:89] "kube-proxy-gk8dq" [403c7d4a-8858-408b-88a3-3b59056a6db8] Running
	I1205 06:06:23.737440   18088 system_pods.go:89] "kube-scheduler-addons-177895" [827c6197-9bb4-488e-99c6-0ffd004a8d3e] Running
	I1205 06:06:23.737448   18088 system_pods.go:89] "metrics-server-85b7d694d7-7cspb" [47c84767-ce03-48d5-bb27-2d49ee685509] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:06:23.737454   18088 system_pods.go:89] "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:06:23.737461   18088 system_pods.go:89] "registry-6b586f9694-hcpm2" [11683fd4-3c9a-429e-ae25-4d15113f118b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:06:23.737468   18088 system_pods.go:89] "registry-creds-764b6fb674-8p8pq" [8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:06:23.737475   18088 system_pods.go:89] "registry-proxy-gzlfd" [5b249ccc-148a-4c35-95c5-f042289920f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:06:23.737480   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d5g82" [8b9afade-56f5-4719-af5a-be801e40a504] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.737490   18088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h9khj" [5e8b27bf-14d3-4269-ab6f-7e236482cb3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:06:23.737497   18088 system_pods.go:89] "storage-provisioner" [866f597a-b240-4a0b-8f9c-d1604ca66331] Running
	I1205 06:06:23.737504   18088 system_pods.go:126] duration metric: took 927.545087ms to wait for k8s-apps to be running ...
	I1205 06:06:23.737513   18088 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 06:06:23.737550   18088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:06:23.774226   18088 system_svc.go:56] duration metric: took 36.702127ms WaitForService to wait for kubelet
	I1205 06:06:23.774254   18088 kubeadm.go:587] duration metric: took 42.574692838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:06:23.774276   18088 node_conditions.go:102] verifying NodePressure condition ...
	I1205 06:06:23.776970   18088 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 06:06:23.776996   18088 node_conditions.go:123] node cpu capacity is 8
	I1205 06:06:23.777016   18088 node_conditions.go:105] duration metric: took 2.734091ms to run NodePressure ...
	I1205 06:06:23.777031   18088 start.go:242] waiting for startup goroutines ...
	I1205 06:06:23.966175   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:24.189377   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:24.289995   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:24.290205   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:24.466000   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:24.689188   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:24.790560   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:24.790672   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:24.966060   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:25.189809   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:25.225277   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:25.225398   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:25.465035   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:25.689145   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:25.790071   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:25.790145   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:25.965492   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:26.189802   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:26.225258   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:26.225354   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:26.464713   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:26.688699   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:26.789631   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:26.789669   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:26.965090   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:27.190029   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:27.225754   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:27.225792   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:27.465577   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:27.689637   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:27.725212   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:27.725230   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:27.964980   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:28.189269   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:28.225932   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:28.226015   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:28.465671   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:28.689654   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:28.790832   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:28.791031   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:28.965506   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:29.190149   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:29.225579   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:29.225613   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:29.465231   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:29.688922   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:29.726777   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:29.727054   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:29.966086   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:30.189016   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:30.227163   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:30.227497   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:30.466894   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:30.690256   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:30.727477   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:30.728469   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:30.965461   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:31.190105   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:31.225624   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:31.225646   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:31.465480   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:31.689591   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:31.725194   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:31.725218   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:31.965991   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:32.188717   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:32.225483   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:32.225547   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:32.465466   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:32.689490   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:32.725013   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:32.725174   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:32.964913   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:33.189433   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:33.225131   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:33.225160   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:33.465877   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:33.689070   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:33.725962   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:33.726005   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:33.966201   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:34.189340   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:34.290170   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:34.290200   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:34.466178   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:34.689999   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:34.725075   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:34.725297   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:34.965059   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:35.189145   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:35.225881   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:35.225897   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:35.465609   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:35.689808   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:35.725316   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:35.725462   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:35.965179   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:36.189654   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:36.290344   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:36.290605   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:36.464817   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:36.688559   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:36.724644   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:36.724644   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:36.965593   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:37.189904   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:37.225113   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:37.225244   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:37.466887   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:37.688618   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:37.724942   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:37.725079   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:37.965826   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:38.188848   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:38.225256   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:38.225262   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:38.464883   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:38.688552   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:38.724547   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:38.724709   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:38.965969   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:39.189569   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:39.224433   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:39.224489   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:39.464847   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:39.689663   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:39.725096   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:39.725166   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:39.966137   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:40.190011   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:40.226002   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:40.226169   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:40.465859   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:40.688904   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:40.725425   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:40.725515   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:40.964882   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:41.189433   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:41.224816   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:41.224987   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:41.465903   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:41.688529   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:41.724193   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:41.724403   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:41.964976   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:42.188993   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:42.225510   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:42.225621   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:42.464909   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:42.688607   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:42.724786   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:42.724883   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:42.965635   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:43.189425   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:43.224537   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:43.224547   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:43.464922   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:43.688975   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:43.725563   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:43.725601   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:43.964914   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:44.191160   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:44.227365   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:44.227477   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:44.465556   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:44.689658   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:44.724957   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:44.725037   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:44.965725   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:45.190287   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:45.225468   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:45.225623   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:45.465294   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:45.689638   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:45.725038   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:45.725077   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:45.966127   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:46.189542   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:46.225158   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:46.225200   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:46.465480   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:46.689539   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:46.724475   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:46.724623   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:46.965896   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:47.189082   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:47.226127   18088 kapi.go:107] duration metric: took 1m4.504066087s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 06:06:47.226175   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:47.465752   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:47.689142   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:47.725933   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:48.074111   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:48.189349   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:48.289385   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:48.464755   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:48.690091   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:48.725412   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:48.964726   18088 kapi.go:107] duration metric: took 59.502442894s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 06:06:48.966373   18088 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-177895 cluster.
	I1205 06:06:48.967586   18088 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 06:06:48.968679   18088 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 06:06:49.189497   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:49.225657   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:49.689137   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:49.725805   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:50.189459   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:50.226209   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:50.689186   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:50.725301   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:51.189563   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:51.224865   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:51.688917   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:51.725446   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:52.190252   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:52.225853   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:52.688569   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:52.725278   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:53.189769   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:53.225102   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:53.689215   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:53.725684   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:54.188801   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:54.225021   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:54.689210   18088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:54.789424   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:55.190989   18088 kapi.go:107] duration metric: took 1m12.005191342s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 06:06:55.225845   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:55.784304   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:56.225776   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:56.725616   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:57.289450   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:57.725889   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:58.225180   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:58.725272   18088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:59.225220   18088 kapi.go:107] duration metric: took 1m16.503160217s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 06:06:59.226732   18088 out.go:179] * Enabled addons: registry-creds, inspektor-gadget, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, metrics-server, yakd, cloud-spanner, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1205 06:06:59.227757   18088 addons.go:530] duration metric: took 1m18.028159047s for enable addons: enabled=[registry-creds inspektor-gadget storage-provisioner amd-gpu-device-plugin nvidia-device-plugin ingress-dns metrics-server yakd cloud-spanner storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1205 06:06:59.227793   18088 start.go:247] waiting for cluster config update ...
	I1205 06:06:59.227812   18088 start.go:256] writing updated cluster config ...
	I1205 06:06:59.228043   18088 ssh_runner.go:195] Run: rm -f paused
	I1205 06:06:59.231862   18088 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:06:59.234553   18088 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xlfl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.237718   18088 pod_ready.go:94] pod "coredns-66bc5c9577-xlfl4" is "Ready"
	I1205 06:06:59.237737   18088 pod_ready.go:86] duration metric: took 3.165751ms for pod "coredns-66bc5c9577-xlfl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.239383   18088 pod_ready.go:83] waiting for pod "etcd-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.242489   18088 pod_ready.go:94] pod "etcd-addons-177895" is "Ready"
	I1205 06:06:59.242511   18088 pod_ready.go:86] duration metric: took 3.110544ms for pod "etcd-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.243973   18088 pod_ready.go:83] waiting for pod "kube-apiserver-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.246910   18088 pod_ready.go:94] pod "kube-apiserver-addons-177895" is "Ready"
	I1205 06:06:59.246931   18088 pod_ready.go:86] duration metric: took 2.94163ms for pod "kube-apiserver-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.248480   18088 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.635043   18088 pod_ready.go:94] pod "kube-controller-manager-addons-177895" is "Ready"
	I1205 06:06:59.635069   18088 pod_ready.go:86] duration metric: took 386.573508ms for pod "kube-controller-manager-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:06:59.856651   18088 pod_ready.go:83] waiting for pod "kube-proxy-gk8dq" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:07:00.235610   18088 pod_ready.go:94] pod "kube-proxy-gk8dq" is "Ready"
	I1205 06:07:00.235634   18088 pod_ready.go:86] duration metric: took 378.957923ms for pod "kube-proxy-gk8dq" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:07:00.435431   18088 pod_ready.go:83] waiting for pod "kube-scheduler-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:07:00.835115   18088 pod_ready.go:94] pod "kube-scheduler-addons-177895" is "Ready"
	I1205 06:07:00.835139   18088 pod_ready.go:86] duration metric: took 399.686441ms for pod "kube-scheduler-addons-177895" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:07:00.835150   18088 pod_ready.go:40] duration metric: took 1.603261281s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:07:00.877671   18088 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 06:07:00.879692   18088 out.go:179] * Done! kubectl is now configured to use "addons-177895" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 06:06:58 addons-177895 crio[774]: time="2025-12-05T06:06:58.461289433Z" level=info msg="Starting container: e6340ed626c1a2983869e52d9035bf0f932a70e95bdd9af5d5eaaff4bae63a66" id=69bf068b-fe36-4335-afae-a7e283b1dfff name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 06:06:58 addons-177895 crio[774]: time="2025-12-05T06:06:58.463081791Z" level=info msg="Started container" PID=5819 containerID=e6340ed626c1a2983869e52d9035bf0f932a70e95bdd9af5d5eaaff4bae63a66 description=ingress-nginx/ingress-nginx-controller-6c8bf45fb-8r9xg/controller id=69bf068b-fe36-4335-afae-a7e283b1dfff name=/runtime.v1.RuntimeService/StartContainer sandboxID=31830ffd539a5a83d3e28f882080b72fb31eef38f8b304060b0120b7ea1ea3be
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.740610646Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6e4b84c8-5db2-48fb-8fa9-893e218346fe name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.74067721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.74615824Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aa38144f2c172b66affa802ef273ee9096f9ec0ab1c33359312c7ca36d5eda25 UID:815ba021-005d-4a49-9b68-12ac2d4fd4bc NetNS:/var/run/netns/857c14bb-cda1-4444-90be-968dfe17db33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000bc270}] Aliases:map[]}"
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.746182787Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.755488914Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aa38144f2c172b66affa802ef273ee9096f9ec0ab1c33359312c7ca36d5eda25 UID:815ba021-005d-4a49-9b68-12ac2d4fd4bc NetNS:/var/run/netns/857c14bb-cda1-4444-90be-968dfe17db33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000bc270}] Aliases:map[]}"
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.75560282Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.756530299Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.7572254Z" level=info msg="Ran pod sandbox aa38144f2c172b66affa802ef273ee9096f9ec0ab1c33359312c7ca36d5eda25 with infra container: default/busybox/POD" id=6e4b84c8-5db2-48fb-8fa9-893e218346fe name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.758185042Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e5981bd5-2d65-4075-aa7e-12324a3a695d name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.75829877Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e5981bd5-2d65-4075-aa7e-12324a3a695d name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.75837176Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e5981bd5-2d65-4075-aa7e-12324a3a695d name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.758930704Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=975663fc-da2b-49aa-b300-a9123be27868 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:07:01 addons-177895 crio[774]: time="2025-12-05T06:07:01.760227967Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 05 06:07:02 addons-177895 crio[774]: time="2025-12-05T06:07:02.390566203Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=975663fc-da2b-49aa-b300-a9123be27868 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:07:02 addons-177895 crio[774]: time="2025-12-05T06:07:02.391081341Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4fe554e2-6e1c-4e83-9089-17c4e9bd4533 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:07:02 addons-177895 crio[774]: time="2025-12-05T06:07:02.392255293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd382e9d-e772-4de3-a88a-db4132b7b7d9 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:07:02 addons-177895 crio[774]: time="2025-12-05T06:07:02.395314205Z" level=info msg="Creating container: default/busybox/busybox" id=462805b9-7699-4eb2-bd55-a83c7ca8a8f1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 06:07:02 addons-177895 crio[774]: time="2025-12-05T06:07:02.395414926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:07:02 addons-177895 crio[774]: time="2025-12-05T06:07:02.400734492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:07:02 addons-177895 crio[774]: time="2025-12-05T06:07:02.401195335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:07:02 addons-177895 crio[774]: time="2025-12-05T06:07:02.428351357Z" level=info msg="Created container a1b8cb88b9e017f6c3553b7c794cd5db25a6d18351d5c51c31b22c906bf7dfa1: default/busybox/busybox" id=462805b9-7699-4eb2-bd55-a83c7ca8a8f1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 06:07:02 addons-177895 crio[774]: time="2025-12-05T06:07:02.428817435Z" level=info msg="Starting container: a1b8cb88b9e017f6c3553b7c794cd5db25a6d18351d5c51c31b22c906bf7dfa1" id=7253ecc9-4035-4c73-b1f4-c9ab46da6f7d name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 06:07:02 addons-177895 crio[774]: time="2025-12-05T06:07:02.430422983Z" level=info msg="Started container" PID=6213 containerID=a1b8cb88b9e017f6c3553b7c794cd5db25a6d18351d5c51c31b22c906bf7dfa1 description=default/busybox/busybox id=7253ecc9-4035-4c73-b1f4-c9ab46da6f7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa38144f2c172b66affa802ef273ee9096f9ec0ab1c33359312c7ca36d5eda25
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	a1b8cb88b9e01       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   aa38144f2c172       busybox                                    default
	e6340ed626c1a       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             11 seconds ago       Running             controller                               0                   31830ffd539a5       ingress-nginx-controller-6c8bf45fb-8r9xg   ingress-nginx
	16645d5e8e337       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          15 seconds ago       Running             csi-snapshotter                          0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	819ee604de0dc       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          16 seconds ago       Running             csi-provisioner                          0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	7897ed230bdcb       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            17 seconds ago       Running             liveness-probe                           0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	bd0232ddd5627       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           18 seconds ago       Running             hostpath                                 0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	07cc0f5510b04       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             18 seconds ago       Exited              patch                                    2                   a00c1487e3639       ingress-nginx-admission-patch-98kcw        ingress-nginx
	71790cf94f6b0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            19 seconds ago       Running             gadget                                   0                   302816952d837       gadget-gb572                               gadget
	d658de91425e0       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago       Running             node-driver-registrar                    0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	ac7e74d074bd9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 22 seconds ago       Running             gcp-auth                                 0                   40c735c75f040       gcp-auth-78565c9fb4-jpdgf                  gcp-auth
	4c91c5eca3759       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              23 seconds ago       Running             registry-proxy                           0                   8cd6deb660f5e       registry-proxy-gzlfd                       kube-system
	b1cef4ce17c14       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     24 seconds ago       Running             nvidia-device-plugin-ctr                 0                   884b5e57aaf60       nvidia-device-plugin-daemonset-vqq7b       kube-system
	3bcfb73c2da0e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   27 seconds ago       Running             csi-external-health-monitor-controller   0                   496e7cb1ad388       csi-hostpathplugin-gm8fx                   kube-system
	320976162b2e2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   28 seconds ago       Exited              create                                   0                   79019fa95f2a2       ingress-nginx-admission-create-756km       ingress-nginx
	32a622051217c       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              28 seconds ago       Running             yakd                                     0                   858664db72ceb       yakd-dashboard-5ff678cb9-qdmqt             yakd-dashboard
	1daa53d0ceb64       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             31 seconds ago       Running             csi-attacher                             0                   267b0b56ad3bb       csi-hostpath-attacher-0                    kube-system
	a1990665675a8       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     32 seconds ago       Running             amd-gpu-device-plugin                    0                   dd4b53e83dc69       amd-gpu-device-plugin-tff2n                kube-system
	6842b5883d35e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   33 seconds ago       Exited              patch                                    0                   a57810f55c10a       gcp-auth-certs-patch-m7wbx                 gcp-auth
	32921b8595d6e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      34 seconds ago       Running             volume-snapshot-controller               0                   8cc12f4d28a93       snapshot-controller-7d9fbc56b8-h9khj       kube-system
	eb0b423a0f9af       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   34 seconds ago       Exited              create                                   0                   bc8a768c5056d       gcp-auth-certs-create-8tkdd                gcp-auth
	0be783dd8c5fd       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      35 seconds ago       Running             volume-snapshot-controller               0                   27bf4ff3feb66       snapshot-controller-7d9fbc56b8-d5g82       kube-system
	f88019728f44c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               36 seconds ago       Running             minikube-ingress-dns                     0                   cf025d22f2de6       kube-ingress-dns-minikube                  kube-system
	6e7946313d15a       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              40 seconds ago       Running             csi-resizer                              0                   2ee456bf5aac8       csi-hostpath-resizer-0                     kube-system
	e90447f36f9cc       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               41 seconds ago       Running             cloud-spanner-emulator                   0                   62f39f9b416ef       cloud-spanner-emulator-5bdddb765-7zxgt     default
	0038f726928bc       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             43 seconds ago       Running             local-path-provisioner                   0                   6bb7f9d40c09d       local-path-provisioner-648f6765c9-kq9cd    local-path-storage
	bc1820c39f391       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           44 seconds ago       Running             registry                                 0                   a5ba51b34dce7       registry-6b586f9694-hcpm2                  kube-system
	eae7b2e3083fc       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        46 seconds ago       Running             metrics-server                           0                   8b5a3431e2e8d       metrics-server-85b7d694d7-7cspb            kube-system
	939f9276ecdd3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             47 seconds ago       Running             coredns                                  0                   d292dce4695d3       coredns-66bc5c9577-xlfl4                   kube-system
	fae790e0ec5bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             47 seconds ago       Running             storage-provisioner                      0                   9e69085c7c02c       storage-provisioner                        kube-system
	e2c0cd58d28ef       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   4033e9af17298       kube-proxy-gk8dq                           kube-system
	36b03b6292161       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   103bb854ba207       kindnet-n79ts                              kube-system
	d693c2ca57323       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   0983dd47daf69       kube-scheduler-addons-177895               kube-system
	88d316347724e       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   09acc882071fe       kube-controller-manager-addons-177895      kube-system
	7e02812d9d790       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   7118545e98873       kube-apiserver-addons-177895               kube-system
	a744380007274       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   3070a2e0c5a0a       etcd-addons-177895                         kube-system
	
	
	==> coredns [939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7] <==
	[INFO] 10.244.0.10:44386 - 9206 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000134777s
	[INFO] 10.244.0.10:35459 - 21713 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000118482s
	[INFO] 10.244.0.10:35459 - 21438 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000163425s
	[INFO] 10.244.0.10:48195 - 35009 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000069487s
	[INFO] 10.244.0.10:48195 - 35262 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000080314s
	[INFO] 10.244.0.10:46330 - 13614 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000044729s
	[INFO] 10.244.0.10:46330 - 13355 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000105307s
	[INFO] 10.244.0.10:37827 - 61410 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000051051s
	[INFO] 10.244.0.10:37827 - 61616 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000090065s
	[INFO] 10.244.0.10:45905 - 42864 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000092155s
	[INFO] 10.244.0.10:45905 - 42595 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108461s
	[INFO] 10.244.0.20:33416 - 54348 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181554s
	[INFO] 10.244.0.20:45060 - 3664 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000254265s
	[INFO] 10.244.0.20:39778 - 38064 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114028s
	[INFO] 10.244.0.20:54833 - 49844 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134931s
	[INFO] 10.244.0.20:42555 - 56480 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085653s
	[INFO] 10.244.0.20:33517 - 54082 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013656s
	[INFO] 10.244.0.20:45274 - 50201 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004916331s
	[INFO] 10.244.0.20:58363 - 33604 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005005132s
	[INFO] 10.244.0.20:33925 - 25611 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005222118s
	[INFO] 10.244.0.20:40134 - 53759 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00532459s
	[INFO] 10.244.0.20:39333 - 9813 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003914847s
	[INFO] 10.244.0.20:38762 - 42859 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004132652s
	[INFO] 10.244.0.20:56223 - 59393 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001080804s
	[INFO] 10.244.0.20:40687 - 12728 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002015257s
	
	
	==> describe nodes <==
	Name:               addons-177895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-177895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=addons-177895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_05_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-177895
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-177895"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:05:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-177895
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:07:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:07:07 +0000   Fri, 05 Dec 2025 06:05:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:07:07 +0000   Fri, 05 Dec 2025 06:05:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:07:07 +0000   Fri, 05 Dec 2025 06:05:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:07:07 +0000   Fri, 05 Dec 2025 06:06:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-177895
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                c5b2c12d-676e-4624-9c30-d03b99e0eb27
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-5bdddb765-7zxgt      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  gadget                      gadget-gb572                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  gcp-auth                    gcp-auth-78565c9fb4-jpdgf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-8r9xg    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         88s
	  kube-system                 amd-gpu-device-plugin-tff2n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-66bc5c9577-xlfl4                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpathplugin-gm8fx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 etcd-addons-177895                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         96s
	  kube-system                 kindnet-n79ts                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-addons-177895                250m (3%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-addons-177895       200m (2%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-gk8dq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-addons-177895                100m (1%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 metrics-server-85b7d694d7-7cspb             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         88s
	  kube-system                 nvidia-device-plugin-daemonset-vqq7b        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 registry-6b586f9694-hcpm2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 registry-creds-764b6fb674-8p8pq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-proxy-gzlfd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 snapshot-controller-7d9fbc56b8-d5g82        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 snapshot-controller-7d9fbc56b8-h9khj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  local-path-storage          local-path-provisioner-648f6765c9-kq9cd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-qdmqt              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 87s   kube-proxy       
	  Normal  Starting                 95s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s   kubelet          Node addons-177895 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s   kubelet          Node addons-177895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s   kubelet          Node addons-177895 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s   node-controller  Node addons-177895 event: Registered Node addons-177895 in Controller
	  Normal  NodeReady                48s   kubelet          Node addons-177895 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 5 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001880] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.357685] i8042: Warning: Keylock active
	[  +0.011462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.477051] block sda: the capability attribute has been deprecated.
	[  +0.081455] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024960] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.135465] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0] <==
	{"level":"warn","ts":"2025-12-05T06:05:32.902964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.908949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.915628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.921735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.927811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.934802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.958409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.964631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:32.972114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:33.015264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:05:43.548055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:06:10.426714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:06:10.435643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:06:10.450345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:06:10.456627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43082","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T06:06:38.948986Z","caller":"traceutil/trace.go:172","msg":"trace[1948693037] linearizableReadLoop","detail":"{readStateIndex:1085; appliedIndex:1085; }","duration":"119.98788ms","start":"2025-12-05T06:06:38.828984Z","end":"2025-12-05T06:06:38.948972Z","steps":["trace[1948693037] 'read index received'  (duration: 119.983333ms)","trace[1948693037] 'applied index is now lower than readState.Index'  (duration: 3.924µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-05T06:06:38.949483Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.480851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-8tkdd\" limit:1 ","response":"range_response_count:1 size:4260"}
	{"level":"info","ts":"2025-12-05T06:06:38.949551Z","caller":"traceutil/trace.go:172","msg":"trace[378412408] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-create-8tkdd; range_end:; response_count:1; response_revision:1054; }","duration":"120.564055ms","start":"2025-12-05T06:06:38.828976Z","end":"2025-12-05T06:06:38.949540Z","steps":["trace[378412408] 'agreement among raft nodes before linearized reading'  (duration: 120.072916ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:06:38.949572Z","caller":"traceutil/trace.go:172","msg":"trace[1644745860] transaction","detail":"{read_only:false; response_revision:1056; number_of_response:1; }","duration":"156.218749ms","start":"2025-12-05T06:06:38.793349Z","end":"2025-12-05T06:06:38.949568Z","steps":["trace[1644745860] 'process raft request'  (duration: 156.130092ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:06:38.949566Z","caller":"traceutil/trace.go:172","msg":"trace[1332123197] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"162.495983ms","start":"2025-12-05T06:06:38.787061Z","end":"2025-12-05T06:06:38.949557Z","steps":["trace[1332123197] 'process raft request'  (duration: 162.036308ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:06:38.949546Z","caller":"traceutil/trace.go:172","msg":"trace[1112850690] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"121.191274ms","start":"2025-12-05T06:06:38.828344Z","end":"2025-12-05T06:06:38.949536Z","steps":["trace[1112850690] 'process raft request'  (duration: 121.166424ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:06:48.072587Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.111713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-05T06:06:48.072733Z","caller":"traceutil/trace.go:172","msg":"trace[1232396964] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1114; }","duration":"108.262925ms","start":"2025-12-05T06:06:47.964452Z","end":"2025-12-05T06:06:48.072715Z","steps":["trace[1232396964] 'range keys from in-memory index tree'  (duration: 108.058193ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:06:55.782735Z","caller":"traceutil/trace.go:172","msg":"trace[1034833694] transaction","detail":"{read_only:false; response_revision:1189; number_of_response:1; }","duration":"107.501655ms","start":"2025-12-05T06:06:55.675210Z","end":"2025-12-05T06:06:55.782712Z","steps":["trace[1034833694] 'process raft request'  (duration: 107.316883ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:07:02.986496Z","caller":"traceutil/trace.go:172","msg":"trace[1413813968] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"100.098889ms","start":"2025-12-05T06:07:02.886383Z","end":"2025-12-05T06:07:02.986482Z","steps":["trace[1413813968] 'process raft request'  (duration: 100.02092ms)"],"step_count":1}
	
	
	==> gcp-auth [ac7e74d074bd9997be585172c57fb1a6c8161383dc7f811de09d617facf2a11a] <==
	2025/12/05 06:06:48 GCP Auth Webhook started!
	2025/12/05 06:07:01 Ready to marshal response ...
	2025/12/05 06:07:01 Ready to write response ...
	2025/12/05 06:07:01 Ready to marshal response ...
	2025/12/05 06:07:01 Ready to write response ...
	2025/12/05 06:07:01 Ready to marshal response ...
	2025/12/05 06:07:01 Ready to write response ...
	2025/12/05 06:07:10 Ready to marshal response ...
	2025/12/05 06:07:10 Ready to write response ...
	
	
	==> kernel <==
	 06:07:10 up 49 min,  0 user,  load average: 1.62, 0.93, 0.37
	Linux addons-177895 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25] <==
	I1205 06:05:42.170001       1 main.go:148] setting mtu 1500 for CNI 
	I1205 06:05:42.170051       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 06:05:42.170099       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T06:05:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 06:05:42.388586       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 06:05:42.388672       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 06:05:42.388707       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 06:05:42.388839       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1205 06:06:12.382558       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1205 06:06:12.389034       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1205 06:06:12.389152       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1205 06:06:12.392301       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1205 06:06:13.989673       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 06:06:13.989700       1 metrics.go:72] Registering metrics
	I1205 06:06:13.989774       1 controller.go:711] "Syncing nftables rules"
	I1205 06:06:22.383146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:06:22.383191       1 main.go:301] handling current node
	I1205 06:06:32.384088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:06:32.384136       1 main.go:301] handling current node
	I1205 06:06:42.382420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:06:42.382461       1 main.go:301] handling current node
	I1205 06:06:52.383082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:06:52.383120       1 main.go:301] handling current node
	I1205 06:07:02.382953       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:07:02.382988       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8] <==
	W1205 06:06:10.435560       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1205 06:06:10.450351       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1205 06:06:10.456569       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1205 06:06:22.523421       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.130.66:443: connect: connection refused
	E1205 06:06:22.523483       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.130.66:443: connect: connection refused" logger="UnhandledError"
	W1205 06:06:22.523624       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.130.66:443: connect: connection refused
	E1205 06:06:22.523652       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.130.66:443: connect: connection refused" logger="UnhandledError"
	W1205 06:06:22.548276       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.130.66:443: connect: connection refused
	E1205 06:06:22.548302       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.130.66:443: connect: connection refused" logger="UnhandledError"
	W1205 06:06:22.551264       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.130.66:443: connect: connection refused
	E1205 06:06:22.551368       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.130.66:443: connect: connection refused" logger="UnhandledError"
	E1205 06:06:25.728089       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.212.194:443: connect: connection refused" logger="UnhandledError"
	W1205 06:06:25.728188       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 06:06:25.728249       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1205 06:06:25.728478       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.212.194:443: connect: connection refused" logger="UnhandledError"
	E1205 06:06:25.733536       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.212.194:443: connect: connection refused" logger="UnhandledError"
	E1205 06:06:25.754125       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.212.194:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.212.194:443: connect: connection refused" logger="UnhandledError"
	I1205 06:06:25.818741       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1205 06:07:08.570231       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57004: use of closed network connection
	E1205 06:07:08.706761       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57034: use of closed network connection
	I1205 06:07:09.978282       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 06:07:10.153536       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.65.134"}
	
	
	==> kube-controller-manager [88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50] <==
	I1205 06:05:40.413397       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1205 06:05:40.413416       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 06:05:40.414498       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1205 06:05:40.414516       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 06:05:40.416812       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1205 06:05:40.416891       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1205 06:05:40.417989       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:05:40.419174       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1205 06:05:40.419238       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1205 06:05:40.419280       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 06:05:40.419290       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1205 06:05:40.419297       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1205 06:05:40.423388       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 06:05:40.425128       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-177895" podCIDRs=["10.244.0.0/24"]
	I1205 06:05:40.428351       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1205 06:05:40.433557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1205 06:05:42.420641       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1205 06:06:10.421758       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 06:06:10.421879       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1205 06:06:10.421925       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1205 06:06:10.442180       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1205 06:06:10.445558       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1205 06:06:10.522532       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:06:10.546706       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:06:25.369138       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601] <==
	I1205 06:05:42.132244       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:05:42.334171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:05:42.440544       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:05:42.440625       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1205 06:05:42.440754       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:05:42.573216       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:05:42.573343       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:05:42.580660       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:05:42.586181       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:05:42.586211       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:05:42.587906       1 config.go:200] "Starting service config controller"
	I1205 06:05:42.588041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:05:42.588565       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:05:42.589005       1 config.go:309] "Starting node config controller"
	I1205 06:05:42.589033       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:05:42.589041       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:05:42.589875       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:05:42.587783       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:05:42.597437       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:05:42.597463       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 06:05:42.688670       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:05:42.690416       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03] <==
	E1205 06:05:33.417364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 06:05:33.417476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 06:05:33.417517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:05:33.417514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 06:05:33.417555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 06:05:33.417243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:05:33.417641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 06:05:33.417640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:05:33.417645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 06:05:33.417649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 06:05:33.417735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 06:05:33.417749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 06:05:33.417748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 06:05:33.417756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:05:33.417819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 06:05:33.417848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 06:05:34.272028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:05:34.323411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1205 06:05:34.324167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 06:05:34.328208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 06:05:34.406554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:05:34.522074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 06:05:34.550108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 06:05:34.557019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1205 06:05:37.511972       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 06:06:46 addons-177895 kubelet[1291]: I1205 06:06:46.795233    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gzlfd" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:06:46 addons-177895 kubelet[1291]: I1205 06:06:46.796449    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-vqq7b" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:06:46 addons-177895 kubelet[1291]: I1205 06:06:46.805279    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-gzlfd" podStartSLOduration=1.326435618 podStartE2EDuration="24.805263666s" podCreationTimestamp="2025-12-05 06:06:22 +0000 UTC" firstStartedPulling="2025-12-05 06:06:23.050187808 +0000 UTC m=+47.551842581" lastFinishedPulling="2025-12-05 06:06:46.529015866 +0000 UTC m=+71.030670629" observedRunningTime="2025-12-05 06:06:46.804676011 +0000 UTC m=+71.306330793" watchObservedRunningTime="2025-12-05 06:06:46.805263666 +0000 UTC m=+71.306918447"
	Dec 05 06:06:47 addons-177895 kubelet[1291]: I1205 06:06:47.799901    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gzlfd" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:06:48 addons-177895 kubelet[1291]: I1205 06:06:48.816979    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-jpdgf" podStartSLOduration=50.626646323 podStartE2EDuration="59.816964408s" podCreationTimestamp="2025-12-05 06:05:49 +0000 UTC" firstStartedPulling="2025-12-05 06:06:38.959915298 +0000 UTC m=+63.461570058" lastFinishedPulling="2025-12-05 06:06:48.150233379 +0000 UTC m=+72.651888143" observedRunningTime="2025-12-05 06:06:48.816377697 +0000 UTC m=+73.318032477" watchObservedRunningTime="2025-12-05 06:06:48.816964408 +0000 UTC m=+73.318619189"
	Dec 05 06:06:51 addons-177895 kubelet[1291]: I1205 06:06:51.579190    1291 scope.go:117] "RemoveContainer" containerID="71a87d71db5a60b21be195701bb67a887cb1aff25da75d6c058cb3803ffb4c60"
	Dec 05 06:06:51 addons-177895 kubelet[1291]: I1205 06:06:51.833293    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-gb572" podStartSLOduration=65.35359116 podStartE2EDuration="1m9.833274159s" podCreationTimestamp="2025-12-05 06:05:42 +0000 UTC" firstStartedPulling="2025-12-05 06:06:46.600114734 +0000 UTC m=+71.101769495" lastFinishedPulling="2025-12-05 06:06:51.079797727 +0000 UTC m=+75.581452494" observedRunningTime="2025-12-05 06:06:51.832474873 +0000 UTC m=+76.334129655" watchObservedRunningTime="2025-12-05 06:06:51.833274159 +0000 UTC m=+76.334928940"
	Dec 05 06:06:52 addons-177895 kubelet[1291]: I1205 06:06:52.830825    1291 scope.go:117] "RemoveContainer" containerID="71a87d71db5a60b21be195701bb67a887cb1aff25da75d6c058cb3803ffb4c60"
	Dec 05 06:06:53 addons-177895 kubelet[1291]: I1205 06:06:53.631978    1291 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 05 06:06:53 addons-177895 kubelet[1291]: I1205 06:06:53.632013    1291 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 05 06:06:54 addons-177895 kubelet[1291]: I1205 06:06:54.036223    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bllr7\" (UniqueName: \"kubernetes.io/projected/1b064a63-5453-49c5-aeb2-07de46ca8bc9-kube-api-access-bllr7\") pod \"1b064a63-5453-49c5-aeb2-07de46ca8bc9\" (UID: \"1b064a63-5453-49c5-aeb2-07de46ca8bc9\") "
	Dec 05 06:06:54 addons-177895 kubelet[1291]: I1205 06:06:54.038952    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b064a63-5453-49c5-aeb2-07de46ca8bc9-kube-api-access-bllr7" (OuterVolumeSpecName: "kube-api-access-bllr7") pod "1b064a63-5453-49c5-aeb2-07de46ca8bc9" (UID: "1b064a63-5453-49c5-aeb2-07de46ca8bc9"). InnerVolumeSpecName "kube-api-access-bllr7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 05 06:06:54 addons-177895 kubelet[1291]: I1205 06:06:54.137150    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bllr7\" (UniqueName: \"kubernetes.io/projected/1b064a63-5453-49c5-aeb2-07de46ca8bc9-kube-api-access-bllr7\") on node \"addons-177895\" DevicePath \"\""
	Dec 05 06:06:54 addons-177895 kubelet[1291]: E1205 06:06:54.338647    1291 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 05 06:06:54 addons-177895 kubelet[1291]: E1205 06:06:54.338711    1291 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb-gcr-creds podName:8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb nodeName:}" failed. No retries permitted until 2025-12-05 06:07:26.338696735 +0000 UTC m=+110.840351496 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb-gcr-creds") pod "registry-creds-764b6fb674-8p8pq" (UID: "8e5ef0f6-376d-4feb-a90b-6aed04a5c5cb") : secret "registry-creds-gcr" not found
	Dec 05 06:06:54 addons-177895 kubelet[1291]: I1205 06:06:54.843408    1291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a00c1487e36394b2ebb34f78928ead98e2019c715d276ddf0b2260b432ce4e3e"
	Dec 05 06:06:54 addons-177895 kubelet[1291]: I1205 06:06:54.859685    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-gm8fx" podStartSLOduration=1.600770255 podStartE2EDuration="32.859666646s" podCreationTimestamp="2025-12-05 06:06:22 +0000 UTC" firstStartedPulling="2025-12-05 06:06:22.983979671 +0000 UTC m=+47.485634439" lastFinishedPulling="2025-12-05 06:06:54.242876067 +0000 UTC m=+78.744530830" observedRunningTime="2025-12-05 06:06:54.858750931 +0000 UTC m=+79.360405722" watchObservedRunningTime="2025-12-05 06:06:54.859666646 +0000 UTC m=+79.361321427"
	Dec 05 06:06:58 addons-177895 kubelet[1291]: I1205 06:06:58.875886    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-8r9xg" podStartSLOduration=73.159061019 podStartE2EDuration="1m16.875867178s" podCreationTimestamp="2025-12-05 06:05:42 +0000 UTC" firstStartedPulling="2025-12-05 06:06:54.701282254 +0000 UTC m=+79.202937018" lastFinishedPulling="2025-12-05 06:06:58.418088406 +0000 UTC m=+82.919743177" observedRunningTime="2025-12-05 06:06:58.874650904 +0000 UTC m=+83.376305685" watchObservedRunningTime="2025-12-05 06:06:58.875867178 +0000 UTC m=+83.377521960"
	Dec 05 06:07:01 addons-177895 kubelet[1291]: I1205 06:07:01.591044    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/815ba021-005d-4a49-9b68-12ac2d4fd4bc-gcp-creds\") pod \"busybox\" (UID: \"815ba021-005d-4a49-9b68-12ac2d4fd4bc\") " pod="default/busybox"
	Dec 05 06:07:01 addons-177895 kubelet[1291]: I1205 06:07:01.591088    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv7q6\" (UniqueName: \"kubernetes.io/projected/815ba021-005d-4a49-9b68-12ac2d4fd4bc-kube-api-access-nv7q6\") pod \"busybox\" (UID: \"815ba021-005d-4a49-9b68-12ac2d4fd4bc\") " pod="default/busybox"
	Dec 05 06:07:02 addons-177895 kubelet[1291]: I1205 06:07:02.988725    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.3555848799999999 podStartE2EDuration="1.988704713s" podCreationTimestamp="2025-12-05 06:07:01 +0000 UTC" firstStartedPulling="2025-12-05 06:07:01.758614491 +0000 UTC m=+86.260269251" lastFinishedPulling="2025-12-05 06:07:02.391734308 +0000 UTC m=+86.893389084" observedRunningTime="2025-12-05 06:07:02.988020957 +0000 UTC m=+87.489675738" watchObservedRunningTime="2025-12-05 06:07:02.988704713 +0000 UTC m=+87.490359493"
	Dec 05 06:07:09 addons-177895 kubelet[1291]: I1205 06:07:09.581376    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662b28bd-c9b6-4cc1-9d59-0d5334218649" path="/var/lib/kubelet/pods/662b28bd-c9b6-4cc1-9d59-0d5334218649/volumes"
	Dec 05 06:07:09 addons-177895 kubelet[1291]: I1205 06:07:09.581746    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f20e83f6-ace5-404d-998c-ed93d8450ccd" path="/var/lib/kubelet/pods/f20e83f6-ace5-404d-998c-ed93d8450ccd/volumes"
	Dec 05 06:07:10 addons-177895 kubelet[1291]: I1205 06:07:10.251780    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjmk4\" (UniqueName: \"kubernetes.io/projected/84ca3301-2a3c-4a90-876c-32de9785e34c-kube-api-access-bjmk4\") pod \"nginx\" (UID: \"84ca3301-2a3c-4a90-876c-32de9785e34c\") " pod="default/nginx"
	Dec 05 06:07:10 addons-177895 kubelet[1291]: I1205 06:07:10.251826    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/84ca3301-2a3c-4a90-876c-32de9785e34c-gcp-creds\") pod \"nginx\" (UID: \"84ca3301-2a3c-4a90-876c-32de9785e34c\") " pod="default/nginx"
	
	
	==> storage-provisioner [fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce] <==
	W1205 06:06:45.035727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:47.039658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:47.045734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:49.047695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:49.050521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:51.052995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:51.057297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:53.060689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:53.064221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:55.069735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:55.077898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:57.082108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:57.096939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:59.099481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:06:59.102773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:07:01.105670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:07:01.109621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:07:03.112665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:07:03.162139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:07:05.164188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:07:05.167315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:07:07.170546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:07:07.175023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:07:09.177682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:07:09.181371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-177895 -n addons-177895
helpers_test.go:269: (dbg) Run:  kubectl --context addons-177895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx ingress-nginx-admission-create-756km ingress-nginx-admission-patch-98kcw registry-creds-764b6fb674-8p8pq
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-177895 describe pod nginx ingress-nginx-admission-create-756km ingress-nginx-admission-patch-98kcw registry-creds-764b6fb674-8p8pq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-177895 describe pod nginx ingress-nginx-admission-create-756km ingress-nginx-admission-patch-98kcw registry-creds-764b6fb674-8p8pq: exit status 1 (68.852773ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-177895/192.168.49.2
	Start Time:       Fri, 05 Dec 2025 06:07:10 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bjmk4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bjmk4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/nginx to addons-177895
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/nginx:alpine"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-756km" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-98kcw" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-8p8pq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-177895 describe pod nginx ingress-nginx-admission-create-756km ingress-nginx-admission-patch-98kcw registry-creds-764b6fb674-8p8pq: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable headlamp --alsologtostderr -v=1: exit status 11 (262.520951ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:11.224319   27090 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:11.224652   27090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:11.224663   27090 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:11.224668   27090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:11.224858   27090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:11.225098   27090 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:11.225416   27090 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:11.225435   27090 addons.go:622] checking whether the cluster is paused
	I1205 06:07:11.225518   27090 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:11.225531   27090 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:11.225869   27090 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:11.245406   27090 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:11.245452   27090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:11.262831   27090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:11.361543   27090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:11.361657   27090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:11.398402   27090 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:11.398425   27090 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:11.398431   27090 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:11.398437   27090 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:11.398441   27090 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:11.398446   27090 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:11.398451   27090 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:11.398456   27090 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:11.398460   27090 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:11.398468   27090 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:11.398473   27090 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:11.398478   27090 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:11.398483   27090 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:11.398487   27090 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:11.398491   27090 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:11.398498   27090 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:11.398506   27090 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:11.398512   27090 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:11.398516   27090 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:11.398521   27090 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:11.398525   27090 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:11.398529   27090 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:11.398534   27090 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:11.398538   27090 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:11.398543   27090 cri.go:89] found id: ""
	I1205 06:07:11.398585   27090 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:11.417207   27090 out.go:203] 
	W1205 06:07:11.418641   27090 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:11.418668   27090 out.go:285] * 
	* 
	W1205 06:07:11.423752   27090 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:11.426121   27090 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.48s)
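
Every `addons disable` failure in this run hits the same wall: before disabling an addon, minikube checks whether the cluster is paused (addons.go:622, cri.go:54 above) and that check shells out to `sudo runc list -f json` inside the node. On this crio-based runner `/run/runc` does not exist (the configured OCI runtime is likely crun rather than runc), so the listing itself exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED. The Go sketch below is a hypothetical reconstruction of that check, not minikube's actual code; it only illustrates how a missing `/run/runc` state directory could be treated as "no paused containers" instead of a fatal error.

// paused_check_sketch.go - hypothetical reconstruction of the failing check,
// not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listPausedRunc mimics the `sudo runc list -f json` call made over SSH by the
// addon-disable path. When /run/runc is absent (e.g. the runtime is crun),
// runc exits 1 with "open /run/runc: no such file or directory".
func listPausedRunc() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// Assumption: a missing state directory means no runc-managed
		// containers exist, so nothing can be paused.
		if strings.Contains(string(out), "no such file or directory") {
			return "[]", nil
		}
		return "", fmt.Errorf("runc list: %w: %s", err, out)
	}
	return string(out), nil
}

func main() {
	list, err := listPausedRunc()
	if err != nil {
		// This branch corresponds to the MK_ADDON_DISABLE_PAUSED exit seen above.
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused container list:", list)
}

Under that assumption the disable path could fall back to the crictl listing it already performed (the "found id:" lines above) instead of failing outright.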

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-7zxgt" [4ba4634f-5d4a-4fb7-a3c4-8bbeda9ec15a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003582997s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (233.404312ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:27.435830   28576 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:27.436126   28576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:27.436136   28576 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:27.436140   28576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:27.436303   28576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:27.436558   28576 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:27.436870   28576 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:27.436887   28576 addons.go:622] checking whether the cluster is paused
	I1205 06:07:27.436963   28576 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:27.436976   28576 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:27.437353   28576 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:27.454140   28576 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:27.454189   28576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:27.469602   28576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:27.566401   28576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:27.566481   28576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:27.595882   28576 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:27.595902   28576 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:27.595906   28576 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:27.595909   28576 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:27.595913   28576 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:27.595916   28576 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:27.595919   28576 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:27.595922   28576 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:27.595924   28576 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:27.595932   28576 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:27.595935   28576 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:27.595938   28576 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:27.595940   28576 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:27.595943   28576 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:27.595946   28576 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:27.595951   28576 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:27.595954   28576 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:27.595958   28576 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:27.595961   28576 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:27.595964   28576 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:27.595966   28576 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:27.595969   28576 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:27.595972   28576 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:27.595975   28576 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:27.595978   28576 cri.go:89] found id: ""
	I1205 06:07:27.596018   28576 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:27.608785   28576 out.go:203] 
	W1205 06:07:27.609854   28576 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:27.609869   28576 out.go:285] * 
	* 
	W1205 06:07:27.612802   28576 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:27.614010   28576 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)
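
CloudSpanner itself came up fine (the emulator pod was Running within about 5s); only the trailing disable step failed with the same runc error. For reference, the "waiting 6m0s for pods matching ..." step logged by helpers_test.go:352 boils down to a label-selector poll; the snippet below is a rough, hypothetical approximation of that wait using client-go, with the namespace, selector, timeout and kubeconfig path chosen for illustration rather than taken from the suite.

// pod_wait_sketch.go - rough approximation (not helpers_test.go itself) of the
// "waiting for pods matching <label>" step that precedes each addon check.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls until at least one pod matching selector is Running,
// or the timeout expires.
func waitForRunningPod(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no Running pod matching %q in %s within %s", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForRunningPod(cs, "default", "app=cloud-spanner-emulator", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Running")
}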

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.07s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-177895 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-177895 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-177895 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [68ed0553-91a1-4e74-b783-59bf39e63024] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [68ed0553-91a1-4e74-b783-59bf39e63024] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [68ed0553-91a1-4e74-b783-59bf39e63024] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002861296s
addons_test.go:967: (dbg) Run:  kubectl --context addons-177895 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 ssh "cat /opt/local-path-provisioner/pvc-981059f5-0a3f-45ab-b5d0-3cd374252d92_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-177895 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-177895 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (236.735927ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:30.367109   28941 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:30.367398   28941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:30.367407   28941 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:30.367411   28941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:30.367652   28941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:30.367914   28941 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:30.368213   28941 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:30.368230   28941 addons.go:622] checking whether the cluster is paused
	I1205 06:07:30.368306   28941 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:30.368330   28941 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:30.368774   28941 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:30.385664   28941 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:30.385716   28941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:30.402214   28941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:30.498192   28941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:30.498269   28941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:30.526131   28941 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:30.526152   28941 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:30.526156   28941 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:30.526159   28941 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:30.526162   28941 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:30.526165   28941 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:30.526168   28941 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:30.526172   28941 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:30.526177   28941 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:30.526185   28941 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:30.526194   28941 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:30.526199   28941 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:30.526207   28941 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:30.526212   28941 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:30.526219   28941 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:30.526235   28941 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:30.526244   28941 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:30.526248   28941 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:30.526252   28941 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:30.526254   28941 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:30.526257   28941 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:30.526260   28941 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:30.526265   28941 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:30.526270   28941 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:30.526277   28941 cri.go:89] found id: ""
	I1205 06:07:30.526345   28941 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:30.539126   28941 out.go:203] 
	W1205 06:07:30.540193   28941 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:30.540207   28941 out.go:285] * 
	* 
	W1205 06:07:30.543994   28941 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:30.545306   28941 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.07s)
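
As with CloudSpanner, the LocalPath scenario itself passed end to end (the PVC bound, the test pod wrote its file, and the content was read back over `minikube ssh`); only the final `addons disable storage-provisioner-rancher` call failed on the runc check. The repeated `kubectl get pvc test-pvc -o jsonpath={.status.phase}` lines above are the bound-phase wait; the sketch below shells out to kubectl the same way, purely as an illustration (the helper name and timeout are assumptions, not the test's code).

// pvc_wait_sketch.go - illustrative only; mirrors the repeated
// `kubectl get pvc ... -o jsonpath={.status.phase}` calls in the LocalPath log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls the PVC's .status.phase via kubectl until it reaches
// want (e.g. "Bound") or the timeout expires. The context name "addons-177895"
// is the profile from the log above.
func waitForPVCPhase(kubeContext, ns, pvc, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", pvc, "-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", ns, pvc, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-177895", "default", "test-pvc", "Bound", 5*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("test-pvc is Bound")
}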

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-vqq7b" [014d4d2c-8611-446b-b016-70d3ec670f7c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003639954s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (250.77863ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:22.723691   28293 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:22.723980   28293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:22.723990   28293 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:22.723995   28293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:22.724261   28293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:22.724541   28293 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:22.724899   28293 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:22.724925   28293 addons.go:622] checking whether the cluster is paused
	I1205 06:07:22.725055   28293 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:22.725076   28293 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:22.725548   28293 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:22.744703   28293 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:22.744753   28293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:22.764953   28293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:22.866273   28293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:22.866379   28293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:22.900730   28293 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:22.900758   28293 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:22.900763   28293 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:22.900766   28293 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:22.900769   28293 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:22.900773   28293 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:22.900775   28293 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:22.900778   28293 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:22.900782   28293 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:22.900786   28293 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:22.900789   28293 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:22.900792   28293 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:22.900795   28293 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:22.900799   28293 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:22.900807   28293 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:22.900815   28293 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:22.900822   28293 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:22.900828   28293 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:22.900833   28293 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:22.900836   28293 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:22.900839   28293 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:22.900842   28293 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:22.900845   28293 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:22.900848   28293 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:22.900850   28293 cri.go:89] found id: ""
	I1205 06:07:22.900885   28293 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:22.914093   28293 out.go:203] 
	W1205 06:07:22.915011   28293 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:22.915028   28293 out.go:285] * 
	* 
	W1205 06:07:22.918039   28293 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:22.919182   28293 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-qdmqt" [5749be3e-577b-4393-8279-7bd1951cee67] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002487528s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable yakd --alsologtostderr -v=1: exit status 11 (232.532145ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:21.302708   27997 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:21.302953   27997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:21.302963   27997 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:21.302967   27997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:21.303158   27997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:21.303401   27997 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:21.303701   27997 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:21.303719   27997 addons.go:622] checking whether the cluster is paused
	I1205 06:07:21.303796   27997 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:21.303807   27997 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:21.304137   27997 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:21.322103   27997 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:21.322155   27997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:21.338374   27997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:21.434183   27997 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:21.434271   27997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:21.461105   27997 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:21.461128   27997 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:21.461142   27997 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:21.461146   27997 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:21.461149   27997 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:21.461153   27997 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:21.461156   27997 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:21.461158   27997 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:21.461161   27997 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:21.461170   27997 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:21.461174   27997 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:21.461177   27997 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:21.461179   27997 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:21.461182   27997 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:21.461185   27997 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:21.461190   27997 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:21.461195   27997 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:21.461199   27997 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:21.461202   27997 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:21.461204   27997 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:21.461207   27997 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:21.461210   27997 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:21.461213   27997 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:21.461215   27997 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:21.461218   27997 cri.go:89] found id: ""
	I1205 06:07:21.461251   27997 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:21.474070   27997 out.go:203] 
	W1205 06:07:21.475262   27997 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:21.475285   27997 out.go:285] * 
	* 
	W1205 06:07:21.478290   27997 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:21.479572   27997 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.24s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (6.23s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-tff2n" [c53bb386-438d-4001-a0ba-bd25cb311601] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003014191s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-177895 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-177895 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (229.653135ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:07:17.488411   27743 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:07:17.488692   27743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:17.488701   27743 out.go:374] Setting ErrFile to fd 2...
	I1205 06:07:17.488706   27743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:07:17.488888   27743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:07:17.489110   27743 mustload.go:66] Loading cluster: addons-177895
	I1205 06:07:17.489415   27743 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:17.489431   27743 addons.go:622] checking whether the cluster is paused
	I1205 06:07:17.489516   27743 config.go:182] Loaded profile config "addons-177895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:07:17.489532   27743 host.go:66] Checking if "addons-177895" exists ...
	I1205 06:07:17.489877   27743 cli_runner.go:164] Run: docker container inspect addons-177895 --format={{.State.Status}}
	I1205 06:07:17.506844   27743 ssh_runner.go:195] Run: systemctl --version
	I1205 06:07:17.506886   27743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177895
	I1205 06:07:17.522867   27743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/addons-177895/id_rsa Username:docker}
	I1205 06:07:17.618439   27743 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:07:17.618502   27743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:07:17.644799   27743 cri.go:89] found id: "16645d5e8e337667ca2b0bad647a81227cfec72020d59c23a4e68f032d5598c6"
	I1205 06:07:17.644820   27743 cri.go:89] found id: "819ee604de0dccef75d46d6eae654a1dd072d867151de8635b16c895e4950d0e"
	I1205 06:07:17.644826   27743 cri.go:89] found id: "7897ed230bdcbd81435d5be315a4f99c0ed622ebf143ac9f6e33de98d018efbd"
	I1205 06:07:17.644831   27743 cri.go:89] found id: "bd0232ddd5627f091b2c410b8cb42a6118e9f1fdc519f3ab4b9266b6e16f7ba0"
	I1205 06:07:17.644835   27743 cri.go:89] found id: "d658de91425e031a8c2952d527c312d61f95d2cb37f908c4a57d1fb3ef35819f"
	I1205 06:07:17.644840   27743 cri.go:89] found id: "4c91c5eca37596bf0601b5ce43781074c00a3a76c2bc0dec622362735b0d29df"
	I1205 06:07:17.644845   27743 cri.go:89] found id: "b1cef4ce17c1443081b44bb0b3a21a6519153cfdf0d42d04331007792bb307a0"
	I1205 06:07:17.644850   27743 cri.go:89] found id: "3bcfb73c2da0e1e8fcd9e116d93960799620a2d75e635954668ec6069b73676b"
	I1205 06:07:17.644854   27743 cri.go:89] found id: "1daa53d0ceb644fd534cdff42144fa2cfb582359790bf3347fd6e506edbb719e"
	I1205 06:07:17.644861   27743 cri.go:89] found id: "a1990665675a8feca6beca0c59735e2ffc0e66bcdf6601ce9c394d2ba4ca8a89"
	I1205 06:07:17.644869   27743 cri.go:89] found id: "32921b8595d6e5192e8797a692755c418684f0baa24fb9e7506761120bbf02b8"
	I1205 06:07:17.644872   27743 cri.go:89] found id: "0be783dd8c5fdc63398f6c518b7c4b5309e8d6d66f031ef7144f255d1b8fec99"
	I1205 06:07:17.644874   27743 cri.go:89] found id: "f88019728f44caa4dc6d9a4f7ba4a158d577b1b52dcc0faf29ecc1a7e17275da"
	I1205 06:07:17.644877   27743 cri.go:89] found id: "6e7946313d15aa69cacac17a6d05c21d9ae6cfb4478c51d2a40290f2e03d2fa2"
	I1205 06:07:17.644880   27743 cri.go:89] found id: "bc1820c39f3917b2171f213ffc60df09b930eabdba2d284e1feca6f3789937eb"
	I1205 06:07:17.644886   27743 cri.go:89] found id: "eae7b2e3083fcc2f1509ad0104fa2d756c583ff6b7849b6ae1e68b338faa573e"
	I1205 06:07:17.644892   27743 cri.go:89] found id: "939f9276ecdd3d76cdbb2a2750ba3fced93176791ff343d19320cf008ea9b5a7"
	I1205 06:07:17.644897   27743 cri.go:89] found id: "fae790e0ec5bc4cb4d89976b9010d11cfc95f9aadb13651c4f95f4829cf5ccce"
	I1205 06:07:17.644900   27743 cri.go:89] found id: "e2c0cd58d28ef859852ce4b0e2ab13852ff1aa6b5afc870f927d0e7a8356f601"
	I1205 06:07:17.644903   27743 cri.go:89] found id: "36b03b6292161bd88331f1a84ab816c26572c09793b31667d1b127dfa1cc6c25"
	I1205 06:07:17.644906   27743 cri.go:89] found id: "d693c2ca57323e526ad7a7fbbf1c6e42df76979ca5d7c641c0525f20e73a4e03"
	I1205 06:07:17.644908   27743 cri.go:89] found id: "88d316347724ef2dbe886f3089cc4b7a9c73f3622eeb8b2058b0d45583babc50"
	I1205 06:07:17.644911   27743 cri.go:89] found id: "7e02812d9d79094303263ed692c38c25a48374d45a069deb5fd6a1c3b8d14ef8"
	I1205 06:07:17.644913   27743 cri.go:89] found id: "a7443800072745c05b5d0b3f10899088dc9f1874282e420af994725141a36fa0"
	I1205 06:07:17.644916   27743 cri.go:89] found id: ""
	I1205 06:07:17.644970   27743 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:07:17.657982   27743 out.go:203] 
	W1205 06:07:17.659192   27743 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:07:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:07:17.659207   27743 out.go:285] * 
	* 
	W1205 06:07:17.662204   27743 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:07:17.663375   27743 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-177895 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.23s)
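
The exit status 11 above comes from minikube's pause check, not from the addon itself: before disabling, it runs `sudo runc list -f json` on the node, and runc fails because its state directory /run/runc does not exist on this CRI-O node. A minimal way to cross-check container state without going through runc's state directory, sketched here on the assumption that the crictl binary is present in the node image (profile name taken from this run):

    # list containers through the CRI instead of runc's state directory
    out/minikube-linux-amd64 -p addons-177895 ssh -- sudo crictl ps -a
    # confirm whether runc ever created /run/runc on this node
    out/minikube-linux-amd64 -p addons-177895 ssh -- ls -ld /run/runc

If crictl shows running containers while /run/runc is missing, the MK_ADDON_DISABLE_PAUSED exit would point at the pause check rather than at a genuinely paused cluster.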

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-882265 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-882265 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-dwgzf" [aadecfa0-4200-469d-8b29-1d8f78bbe89d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-882265 -n functional-882265
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-05 06:23:16.50485051 +0000 UTC m=+1112.889097857
functional_test.go:1645: (dbg) Run:  kubectl --context functional-882265 describe po hello-node-connect-7d85dfc575-dwgzf -n default
functional_test.go:1645: (dbg) kubectl --context functional-882265 describe po hello-node-connect-7d85dfc575-dwgzf -n default:
Name:             hello-node-connect-7d85dfc575-dwgzf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-882265/192.168.49.2
Start Time:       Fri, 05 Dec 2025 06:13:16 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6c8kk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6c8kk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dwgzf to functional-882265
Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-882265 logs hello-node-connect-7d85dfc575-dwgzf -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-882265 logs hello-node-connect-7d85dfc575-dwgzf -n default: exit status 1 (56.851826ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-dwgzf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-882265 logs hello-node-connect-7d85dfc575-dwgzf -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
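
The Events above give the root cause for this timeout: CRI-O on the node enforces short-name resolution, so the unqualified reference kicbase/echo-server is rejected as ambiguous and the pod never gets past ImagePullBackOff, which is why the 10m0s wait expires. A sketch of the same deployment with an unambiguous reference, assuming the Docker Hub copy of the image is the intended source (the registry prefix is the only change from the commands the test ran):

    # fully qualify the image so short-name enforcement has nothing to guess
    kubectl --context functional-882265 create deployment hello-node-connect --image=docker.io/kicbase/echo-server
    kubectl --context functional-882265 expose deployment hello-node-connect --type=NodePort --port=8080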
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-882265 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-dwgzf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-882265/192.168.49.2
Start Time:       Fri, 05 Dec 2025 06:13:16 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6c8kk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6c8kk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dwgzf to functional-882265
Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-882265 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-882265 logs -l app=hello-node-connect: exit status 1 (57.840745ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-dwgzf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-882265 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-882265 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.27.82
IPs:                      10.100.27.82
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31444/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
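
The empty Endpoints field is consistent with the pod state above: with no ready pod behind the app=hello-node-connect selector, the NodePort has nothing to forward to. Whether enforcing short-name mode is actually intended on this node can be read from the containers registries configuration; a sketch, assuming the usual /etc/containers layout inside the node image:

    # show the short-name policy CRI-O is applying on the node
    out/minikube-linux-amd64 -p functional-882265 ssh -- sudo grep -R short-name /etc/containers/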
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-882265
helpers_test.go:243: (dbg) docker inspect functional-882265:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4a31c6ac8575caf29b9a8bc06064ed6cbf5d290936d554e2e667ac51f99ab7aa",
	        "Created": "2025-12-05T06:10:57.229908083Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:10:57.260313624Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/4a31c6ac8575caf29b9a8bc06064ed6cbf5d290936d554e2e667ac51f99ab7aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4a31c6ac8575caf29b9a8bc06064ed6cbf5d290936d554e2e667ac51f99ab7aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/4a31c6ac8575caf29b9a8bc06064ed6cbf5d290936d554e2e667ac51f99ab7aa/hosts",
	        "LogPath": "/var/lib/docker/containers/4a31c6ac8575caf29b9a8bc06064ed6cbf5d290936d554e2e667ac51f99ab7aa/4a31c6ac8575caf29b9a8bc06064ed6cbf5d290936d554e2e667ac51f99ab7aa-json.log",
	        "Name": "/functional-882265",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-882265:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-882265",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4a31c6ac8575caf29b9a8bc06064ed6cbf5d290936d554e2e667ac51f99ab7aa",
	                "LowerDir": "/var/lib/docker/overlay2/f31bf113537835ab2345550ce62ad5bdfcaba1f307e10700647381985f62386e-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f31bf113537835ab2345550ce62ad5bdfcaba1f307e10700647381985f62386e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f31bf113537835ab2345550ce62ad5bdfcaba1f307e10700647381985f62386e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f31bf113537835ab2345550ce62ad5bdfcaba1f307e10700647381985f62386e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-882265",
	                "Source": "/var/lib/docker/volumes/functional-882265/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-882265",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-882265",
	                "name.minikube.sigs.k8s.io": "functional-882265",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "07b008fbb75183096fa9d12c9f786fb8527014df588188f151c7a7c8b79c7850",
	            "SandboxKey": "/var/run/docker/netns/07b008fbb751",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-882265": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95298503e70287f5c26cdbb6c8dac690e972d89853d48ee16482226969a29470",
	                    "EndpointID": "3e332fe6b62baebe925bbdbe9cae9b557c07dd7533be655ca83377e1e9e2056b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "c2:36:3a:e7:c2:92",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-882265",
	                        "4a31c6ac8575"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-882265 -n functional-882265
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-882265 logs -n 25: (1.180389534s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-882265 ssh sudo systemctl is-active docker                                                  │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │                     │
	│ ssh            │ functional-882265 ssh sudo systemctl is-active containerd                                              │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │                     │
	│ license        │                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ ssh            │ functional-882265 ssh sudo cat /etc/ssl/certs/16314.pem                                                │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ ssh            │ functional-882265 ssh sudo cat /usr/share/ca-certificates/16314.pem                                    │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ ssh            │ functional-882265 ssh sudo cat /etc/test/nested/copy/16314/hosts                                       │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ ssh            │ functional-882265 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ ssh            │ functional-882265 ssh sudo cat /etc/ssl/certs/163142.pem                                               │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ ssh            │ functional-882265 ssh sudo cat /usr/share/ca-certificates/163142.pem                                   │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ ssh            │ functional-882265 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ image          │ functional-882265 image ls --format short --alsologtostderr                                            │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ image          │ functional-882265 image ls --format yaml --alsologtostderr                                             │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ image          │ functional-882265 image ls --format json --alsologtostderr                                             │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ image          │ functional-882265 image ls --format table --alsologtostderr                                            │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ ssh            │ functional-882265 ssh pgrep buildkitd                                                                  │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │                     │
	│ image          │ functional-882265 image build -t localhost/my-image:functional-882265 testdata/build --alsologtostderr │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ image          │ functional-882265 image ls                                                                             │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ update-context │ functional-882265 update-context --alsologtostderr -v=2                                                │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ update-context │ functional-882265 update-context --alsologtostderr -v=2                                                │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ update-context │ functional-882265 update-context --alsologtostderr -v=2                                                │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │ 05 Dec 25 06:13 UTC │
	│ service        │ functional-882265 service list                                                                         │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │ 05 Dec 25 06:23 UTC │
	│ service        │ functional-882265 service list -o json                                                                 │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │ 05 Dec 25 06:23 UTC │
	│ service        │ functional-882265 service --namespace=default --https --url hello-node                                 │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │                     │
	│ service        │ functional-882265 service hello-node --url --format={{.IP}}                                            │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │                     │
	│ service        │ functional-882265 service hello-node --url                                                             │ functional-882265 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:13:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:13:15.304693   52854 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:13:15.304773   52854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:15.304780   52854 out.go:374] Setting ErrFile to fd 2...
	I1205 06:13:15.304784   52854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:15.304990   52854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:13:15.305400   52854 out.go:368] Setting JSON to false
	I1205 06:13:15.306257   52854 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3339,"bootTime":1764911856,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:13:15.306313   52854 start.go:143] virtualization: kvm guest
	I1205 06:13:15.307932   52854 out.go:179] * [functional-882265] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:13:15.309541   52854 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:13:15.309528   52854 notify.go:221] Checking for updates...
	I1205 06:13:15.310914   52854 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:13:15.311989   52854 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:13:15.313167   52854 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:13:15.314231   52854 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:13:15.315301   52854 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:13:15.316612   52854 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:15.317136   52854 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:13:15.339382   52854 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:13:15.339471   52854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:13:15.389342   52854 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-05 06:13:15.38045819 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:13:15.389453   52854 docker.go:319] overlay module found
	I1205 06:13:15.390955   52854 out.go:179] * Using the docker driver based on existing profile
	I1205 06:13:15.392066   52854 start.go:309] selected driver: docker
	I1205 06:13:15.392077   52854 start.go:927] validating driver "docker" against &{Name:functional-882265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-882265 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:13:15.392180   52854 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:13:15.392262   52854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:13:15.445811   52854 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-05 06:13:15.436496067 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:13:15.446536   52854 cni.go:84] Creating CNI manager for ""
	I1205 06:13:15.446603   52854 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:13:15.446672   52854 start.go:353] cluster config:
	{Name:functional-882265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-882265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:13:15.448225   52854 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 05 06:13:36 functional-882265 crio[3592]: time="2025-12-05T06:13:36.674870391Z" level=info msg="Creating container: default/mysql-5bb876957f-x2kd8/mysql" id=2b972f04-2bd4-4ab1-9403-3f4717fd9e05 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 06:13:36 functional-882265 crio[3592]: time="2025-12-05T06:13:36.674993177Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:13:36 functional-882265 crio[3592]: time="2025-12-05T06:13:36.679880877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:13:36 functional-882265 crio[3592]: time="2025-12-05T06:13:36.680608768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:13:36 functional-882265 crio[3592]: time="2025-12-05T06:13:36.705492189Z" level=info msg="Created container bf0ef6f0b614e44e14dca7aa81755f72af8ba5f6d525813792bea22744c5f11b: default/mysql-5bb876957f-x2kd8/mysql" id=2b972f04-2bd4-4ab1-9403-3f4717fd9e05 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 06:13:36 functional-882265 crio[3592]: time="2025-12-05T06:13:36.706091573Z" level=info msg="Starting container: bf0ef6f0b614e44e14dca7aa81755f72af8ba5f6d525813792bea22744c5f11b" id=d841a113-2ce5-4fa8-97a6-030442e1843e name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 06:13:36 functional-882265 crio[3592]: time="2025-12-05T06:13:36.70784596Z" level=info msg="Started container" PID=7799 containerID=bf0ef6f0b614e44e14dca7aa81755f72af8ba5f6d525813792bea22744c5f11b description=default/mysql-5bb876957f-x2kd8/mysql id=d841a113-2ce5-4fa8-97a6-030442e1843e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e84db5c349c641cef58296035cf56e13138ad01df93d83d0301f8be4fb7bf3d9
	Dec 05 06:13:39 functional-882265 crio[3592]: time="2025-12-05T06:13:39.475725859Z" level=info msg="Stopping pod sandbox: 7bf436e121933e97917833f3a6bbc1e87e0bafcae92a31b76b1741a08ac3c89d" id=e59dfe93-3364-4f28-83b8-ab684a19b06a name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:13:39 functional-882265 crio[3592]: time="2025-12-05T06:13:39.475785679Z" level=info msg="Stopped pod sandbox (already stopped): 7bf436e121933e97917833f3a6bbc1e87e0bafcae92a31b76b1741a08ac3c89d" id=e59dfe93-3364-4f28-83b8-ab684a19b06a name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:13:39 functional-882265 crio[3592]: time="2025-12-05T06:13:39.476182596Z" level=info msg="Removing pod sandbox: 7bf436e121933e97917833f3a6bbc1e87e0bafcae92a31b76b1741a08ac3c89d" id=963ed624-b501-4aca-a474-7cef1f0c9dee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:13:39 functional-882265 crio[3592]: time="2025-12-05T06:13:39.478487944Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 06:13:39 functional-882265 crio[3592]: time="2025-12-05T06:13:39.478557006Z" level=info msg="Removed pod sandbox: 7bf436e121933e97917833f3a6bbc1e87e0bafcae92a31b76b1741a08ac3c89d" id=963ed624-b501-4aca-a474-7cef1f0c9dee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:13:39 functional-882265 crio[3592]: time="2025-12-05T06:13:39.479009688Z" level=info msg="Stopping pod sandbox: 783e17e83dd7e70dacf6810825a08f785bae7b0624e281ab1002bdeb4f48c884" id=212637c0-d4b9-47ff-aa36-52334573d6a1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:13:39 functional-882265 crio[3592]: time="2025-12-05T06:13:39.479045084Z" level=info msg="Stopped pod sandbox (already stopped): 783e17e83dd7e70dacf6810825a08f785bae7b0624e281ab1002bdeb4f48c884" id=212637c0-d4b9-47ff-aa36-52334573d6a1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:13:39 functional-882265 crio[3592]: time="2025-12-05T06:13:39.479315814Z" level=info msg="Removing pod sandbox: 783e17e83dd7e70dacf6810825a08f785bae7b0624e281ab1002bdeb4f48c884" id=acb172b9-00bf-4c5b-b34a-fa6a646ab210 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:13:39 functional-882265 crio[3592]: time="2025-12-05T06:13:39.481714359Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 06:13:39 functional-882265 crio[3592]: time="2025-12-05T06:13:39.481768829Z" level=info msg="Removed pod sandbox: 783e17e83dd7e70dacf6810825a08f785bae7b0624e281ab1002bdeb4f48c884" id=acb172b9-00bf-4c5b-b34a-fa6a646ab210 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:13:42 functional-882265 crio[3592]: time="2025-12-05T06:13:42.463874088Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=488252fd-7e46-4325-b3a3-12a69a88f077 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:13:53 functional-882265 crio[3592]: time="2025-12-05T06:13:53.464387191Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=75d2b103-143e-43d4-9329-e35410a10557 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:14:23 functional-882265 crio[3592]: time="2025-12-05T06:14:23.463841988Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e8c6f755-b281-4bb4-93b9-d18ddd1c85cf name=/runtime.v1.ImageService/PullImage
	Dec 05 06:14:38 functional-882265 crio[3592]: time="2025-12-05T06:14:38.464167922Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=17cc07c1-03ce-4145-8ba4-52fae5d69ef1 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:15:44 functional-882265 crio[3592]: time="2025-12-05T06:15:44.464229459Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=65c406c0-44e2-4d27-85f2-e5fb9dcbcec7 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:16:06 functional-882265 crio[3592]: time="2025-12-05T06:16:06.464191495Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a607a3a5-9208-435b-893f-04388effff76 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:18:39 functional-882265 crio[3592]: time="2025-12-05T06:18:39.464682429Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8d1f28bc-d8a7-4785-bcb0-1bb97b48db7d name=/runtime.v1.ImageService/PullImage
	Dec 05 06:18:50 functional-882265 crio[3592]: time="2025-12-05T06:18:50.464248064Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9d98546b-df47-4d6f-90ad-b5698a87d9c2 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	bf0ef6f0b614e       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   e84db5c349c64       mysql-5bb876957f-x2kd8                       default
	4dd1ddf2e380b       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   2d9fe2e2a3eb5       sp-pod                                       default
	31533cee97cb0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   a1135fa1e1b42       kubernetes-dashboard-855c9754f9-2zkzg        kubernetes-dashboard
	d9ec80e611f1f       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   3c6dcf68c9532       dashboard-metrics-scraper-77bf4d6c4c-bjswc   kubernetes-dashboard
	3128c9ce49406       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   b84626a90eac3       nginx-svc                                    default
	e27605e3871c9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   1c9b9bcdd03d3       busybox-mount                                default
	b7099121fa2d7       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                 10 minutes ago      Running             kube-apiserver              0                   2ecce4ae2a7a9       kube-apiserver-functional-882265             kube-system
	b1634aad4b79a       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 10 minutes ago      Running             kube-scheduler              1                   33da2526a9c53       kube-scheduler-functional-882265             kube-system
	c1a86d3436009       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 10 minutes ago      Running             kube-controller-manager     2                   a80d43368227d       kube-controller-manager-functional-882265    kube-system
	a13b0707d2f76       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 10 minutes ago      Created             kube-controller-manager     1                   a80d43368227d       kube-controller-manager-functional-882265    kube-system
	26e10e69f8046       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                 10 minutes ago      Created             kube-apiserver              1                   4587de8551e7e       kube-apiserver-functional-882265             kube-system
	b4d6dbb0e7737       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 10 minutes ago      Running             etcd                        1                   27ae8158a20eb       etcd-functional-882265                       kube-system
	35ade64d8ac53       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 10 minutes ago      Running             kube-proxy                  1                   2ad82d86fc099       kube-proxy-wqvt6                             kube-system
	2e0bceb46c29f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   bd74b787b0cfc       kindnet-p7d8g                                kube-system
	d8083146eeb9e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   0a1182dbb8608       storage-provisioner                          kube-system
	1fbf25f0dac10       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   f2055c0ee8351       coredns-66bc5c9577-vv7nt                     kube-system
	d3e2f1ad3efd9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   f2055c0ee8351       coredns-66bc5c9577-vv7nt                     kube-system
	5b19e586d9e9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   0a1182dbb8608       storage-provisioner                          kube-system
	45e085f51d7dd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   bd74b787b0cfc       kindnet-p7d8g                                kube-system
	112e8161bde0e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 12 minutes ago      Exited              kube-proxy                  0                   2ad82d86fc099       kube-proxy-wqvt6                             kube-system
	35145e3b42939       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 12 minutes ago      Exited              etcd                        0                   27ae8158a20eb       etcd-functional-882265                       kube-system
	a27c863b786f0       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 12 minutes ago      Exited              kube-scheduler              0                   33da2526a9c53       kube-scheduler-functional-882265             kube-system
	
	
	==> coredns [1fbf25f0dac102457da63554bb2d20a2a39b7c48b648f72870a297c4c9813dad] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53787 - 15450 "HINFO IN 8030538043770282654.2019537506377899656. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078591723s
	
	
	==> coredns [d3e2f1ad3efd9d30272d8c3904f9e5a9a3d0b1b72e10b61d1fe7552a00f4fe28] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60249 - 53013 "HINFO IN 3931487357217892583.7284577328828020418. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.465716699s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-882265
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-882265
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=functional-882265
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_11_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:11:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-882265
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:23:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:21:43 +0000   Fri, 05 Dec 2025 06:11:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:21:43 +0000   Fri, 05 Dec 2025 06:11:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:21:43 +0000   Fri, 05 Dec 2025 06:11:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:21:43 +0000   Fri, 05 Dec 2025 06:11:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-882265
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                29ab3f66-5204-4c6f-a1d9-cf4343e375d6
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-l9cxp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-dwgzf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-x2kd8                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m48s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 coredns-66bc5c9577-vv7nt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-882265                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-p7d8g                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-882265              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-882265     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-wqvt6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-882265              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-bjswc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2zkzg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-882265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-882265 status is now: NodeHasSufficientMemory
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-882265 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-882265 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-882265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-882265 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-882265 event: Registered Node functional-882265 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-882265 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x9 over 10m)  kubelet          Node functional-882265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-882265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-882265 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-882265 event: Registered Node functional-882265 in Controller
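
For reference, this node view can be regenerated against the same cluster (assuming the kubeconfig written by the functional-882265 profile) with:

	kubectl describe node functional-882265
	kubectl get pods -A -o wide --field-selector spec.nodeName=functional-882265   # matches the "Non-terminated Pods" table above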
	
	
	==> dmesg <==
	[  +0.081455] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024960] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.135465] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 5 06:07] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.022771] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023869] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023920] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023880] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +2.047782] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +4.032580] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +8.063178] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[ +16.381345] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[Dec 5 06:08] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	
	
	==> etcd [35145e3b4293967b1f23ec9101aafda640cbb90f56e60a15d5c4618a3567e64d] <==
	{"level":"warn","ts":"2025-12-05T06:11:05.991591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:05.998489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:06.006005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:06.035603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:06.042764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:06.048951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:06.094805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42784","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T06:12:20.243723Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-05T06:12:20.243798Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-882265","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-05T06:12:20.243908Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-05T06:12:27.245372Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-05T06:12:27.245485Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-05T06:12:27.245595Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-05T06:12:27.245775Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-05T06:12:27.245791Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-05T06:12:27.245683Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-05T06:12:27.245808Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-05T06:12:27.245816Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T06:12:27.245720Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-05T06:12:27.245866Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-05T06:12:27.245879Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-05T06:12:27.247925Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-05T06:12:27.247984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T06:12:27.248006Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-05T06:12:27.248011Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-882265","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [b4d6dbb0e7737fe75b7803bc4b548661af1444cc25f8ec406b011988768529e6] <==
	{"level":"warn","ts":"2025-12-05T06:12:40.886532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.892728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.899454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.905947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.912428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.918595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.924723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.931652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.939331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.946608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.953982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.962740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.970315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.977034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.983385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:40.989894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:41.006287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:41.012119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:41.017795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:41.058872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:13:42.965618Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.158868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-05T06:13:42.965708Z","caller":"traceutil/trace.go:172","msg":"trace[895697675] range","detail":"{range_begin:/registry/jobs; range_end:; response_count:0; response_revision:886; }","duration":"111.269906ms","start":"2025-12-05T06:13:42.854425Z","end":"2025-12-05T06:13:42.965695Z","steps":["trace[895697675] 'range keys from in-memory index tree'  (duration: 111.097519ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:22:40.588117Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1166}
	{"level":"info","ts":"2025-12-05T06:22:40.607294Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1166,"took":"18.808917ms","hash":2720831796,"current-db-size-bytes":3477504,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-12-05T06:22:40.607342Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2720831796,"revision":1166,"compact-revision":-1}
	
	
	==> kernel <==
	 06:23:17 up  1:05,  0 user,  load average: 0.08, 0.19, 0.31
	Linux functional-882265 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2e0bceb46c29f3d656b6ce1cf2523b4a369e7ad0fb5b27265c670e029c4ba89a] <==
	I1205 06:21:11.399574       1 main.go:301] handling current node
	I1205 06:21:21.403425       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:21:21.403453       1 main.go:301] handling current node
	I1205 06:21:31.403945       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:21:31.403989       1 main.go:301] handling current node
	I1205 06:21:41.399676       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:21:41.399708       1 main.go:301] handling current node
	I1205 06:21:51.400404       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:21:51.400436       1 main.go:301] handling current node
	I1205 06:22:01.402417       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:22:01.402451       1 main.go:301] handling current node
	I1205 06:22:11.400209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:22:11.400247       1 main.go:301] handling current node
	I1205 06:22:21.405179       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:22:21.405220       1 main.go:301] handling current node
	I1205 06:22:31.400626       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:22:31.400658       1 main.go:301] handling current node
	I1205 06:22:41.399285       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:22:41.399356       1 main.go:301] handling current node
	I1205 06:22:51.407409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:22:51.407449       1 main.go:301] handling current node
	I1205 06:23:01.399692       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:23:01.399722       1 main.go:301] handling current node
	I1205 06:23:11.398809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:23:11.398844       1 main.go:301] handling current node
	
	
	==> kindnet [45e085f51d7dd5e8cde0f03ccfa096b473ed6cac72ca340a2fe2132e400d9d36] <==
	I1205 06:11:15.059344       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 06:11:15.059555       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1205 06:11:15.059787       1 main.go:148] setting mtu 1500 for CNI 
	I1205 06:11:15.059806       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 06:11:15.059839       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T06:11:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 06:11:15.258100       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 06:11:15.258127       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 06:11:15.258139       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 06:11:15.258524       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1205 06:11:45.260079       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1205 06:11:45.260092       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1205 06:11:45.260081       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1205 06:11:45.260123       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1205 06:11:46.558739       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 06:11:46.558761       1 metrics.go:72] Registering metrics
	I1205 06:11:46.558812       1 controller.go:711] "Syncing nftables rules"
	I1205 06:11:55.265384       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:11:55.265447       1 main.go:301] handling current node
	I1205 06:12:05.265397       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:12:05.265428       1 main.go:301] handling current node
	I1205 06:12:15.260013       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:12:15.260044       1 main.go:301] handling current node
	
	
	==> kube-apiserver [26e10e69f8046a9ccb475254b9d5aec234c2dc265f3651a4f74c8c1c9b07c05c] <==
	
	
	==> kube-apiserver [b7099121fa2d70ef6a8df220382c11529bba9693be90bee4b620cce1452e07f7] <==
	I1205 06:12:41.537340       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 06:12:41.539541       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 06:12:41.539541       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 06:12:42.408225       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1205 06:12:42.713595       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1205 06:12:42.714656       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 06:12:42.718495       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 06:12:43.297421       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1205 06:12:43.381185       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 06:12:43.424064       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 06:12:43.428356       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 06:12:45.187884       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 06:13:00.785912       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.104.211"}
	I1205 06:13:04.724270       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.81.214"}
	I1205 06:13:07.742508       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.124.225"}
	I1205 06:13:16.193005       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.27.82"}
	I1205 06:13:16.294826       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 06:13:16.386941       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.98.123"}
	I1205 06:13:16.396455       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.244.208"}
	E1205 06:13:20.795884       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56118: use of closed network connection
	E1205 06:13:28.820586       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56172: use of closed network connection
	I1205 06:13:29.227824       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.210.226"}
	E1205 06:13:44.349218       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:46998: use of closed network connection
	E1205 06:13:45.681241       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47006: use of closed network connection
	I1205 06:22:41.431151       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [a13b0707d2f7670b0b8497f1ea4b009256827e648d790588e6980e11208786d2] <==
	
	
	==> kube-controller-manager [c1a86d343600903d614ae1fadfdda296cec9ea4d12a3224bac94c539a5140ad9] <==
	I1205 06:12:44.835179       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 06:12:44.835188       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1205 06:12:44.835217       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 06:12:44.835170       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1205 06:12:44.835179       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 06:12:44.835304       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1205 06:12:44.835674       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1205 06:12:44.836709       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1205 06:12:44.836787       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 06:12:44.836881       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-882265"
	I1205 06:12:44.837627       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 06:12:44.837797       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1205 06:12:44.838927       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:12:44.839962       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1205 06:12:44.841144       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1205 06:12:44.842265       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:12:44.843366       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 06:12:44.858104       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1205 06:12:44.860384       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1205 06:13:16.332478       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1205 06:13:16.336178       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1205 06:13:16.341682       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1205 06:13:16.342212       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1205 06:13:16.346351       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1205 06:13:16.351335       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [112e8161bde0eb3a28993c45f2a0be75453ea12d325c6bf3b84c7aeaedb3adf6] <==
	I1205 06:11:14.914200       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:11:14.979816       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:11:15.080831       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:11:15.080861       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1205 06:11:15.080948       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:11:15.097957       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:11:15.097993       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:11:15.102836       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:11:15.103150       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:11:15.103167       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:11:15.104250       1 config.go:200] "Starting service config controller"
	I1205 06:11:15.104264       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:11:15.104295       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:11:15.104302       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:11:15.104365       1 config.go:309] "Starting node config controller"
	I1205 06:11:15.104381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:11:15.104387       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:11:15.104419       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:11:15.104428       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:11:15.204775       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:11:15.204802       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:11:15.204816       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [35ade64d8ac53259464510738d738b08f2c9cd954c4c17f41f5405936201ebe7] <==
	I1205 06:12:21.095581       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:12:21.154806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:12:21.255871       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:12:21.255896       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1205 06:12:21.255948       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:12:21.273775       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:12:21.273810       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:12:21.278819       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:12:21.279139       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:12:21.279176       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:12:21.282141       1 config.go:200] "Starting service config controller"
	I1205 06:12:21.282163       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:12:21.282223       1 config.go:309] "Starting node config controller"
	I1205 06:12:21.282241       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:12:21.282248       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:12:21.282268       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:12:21.282274       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:12:21.282295       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:12:21.282301       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:12:21.382249       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:12:21.382377       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:12:21.382394       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	E1205 06:12:41.449756       1 reflector.go:205] "Failed to watch" err="nodes \"functional-882265\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:12:41.449772       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1205 06:12:41.450401       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
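
The three "Failed to watch ... RBAC ... not found" errors above are all timestamped 06:12:41, during the control-plane restart and before the bootstrap cluster roles were re-registered; they do not recur later in the log. A quick check that the roles and kube-proxy's permissions are back in place (a sketch, assuming kubectl access to the same cluster):

	kubectl get clusterrole system:node-proxier system:discovery system:basic-user system:public-info-viewer
	kubectl auth can-i watch nodes --as=system:serviceaccount:kube-system:kube-proxy
	kubectl auth can-i watch endpointslices.discovery.k8s.io --as=system:serviceaccount:kube-system:kube-proxy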
	
	
	==> kube-scheduler [a27c863b786f083344ef40d4daded0d5adc40e2f439661137fbc7496c4c2d5bb] <==
	E1205 06:11:06.485962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:11:06.486022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 06:11:06.486046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:11:06.486086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 06:11:06.485972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:11:06.486184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 06:11:06.486230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:11:07.372511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 06:11:07.395709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:11:07.424013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 06:11:07.432904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:11:07.437841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 06:11:07.438600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 06:11:07.523441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:11:07.581471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 06:11:07.593598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:11:07.664542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 06:11:07.675488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1205 06:11:07.983703       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:12:37.973735       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1205 06:12:37.973807       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:12:37.973871       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1205 06:12:37.973915       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1205 06:12:37.973926       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1205 06:12:37.973967       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b1634aad4b79ae6ec47832e8ff30ce1fdc7824fa89d2fd1ed433fea62b377044] <==
	I1205 06:12:40.337312       1 serving.go:386] Generated self-signed cert in-memory
	W1205 06:12:41.431111       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 06:12:41.431146       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 06:12:41.431158       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 06:12:41.431168       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 06:12:41.452895       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1205 06:12:41.453016       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:12:41.455181       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:12:41.455221       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:12:41.455442       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 06:12:41.455476       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 06:12:41.555726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 06:20:29 functional-882265 kubelet[4307]: E1205 06:20:29.464276    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:20:34 functional-882265 kubelet[4307]: E1205 06:20:34.463774    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:20:44 functional-882265 kubelet[4307]: E1205 06:20:44.463746    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:20:49 functional-882265 kubelet[4307]: E1205 06:20:49.463969    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:20:58 functional-882265 kubelet[4307]: E1205 06:20:58.463613    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:21:04 functional-882265 kubelet[4307]: E1205 06:21:04.463044    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:21:12 functional-882265 kubelet[4307]: E1205 06:21:12.463357    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:21:18 functional-882265 kubelet[4307]: E1205 06:21:18.463747    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:21:23 functional-882265 kubelet[4307]: E1205 06:21:23.463210    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:21:30 functional-882265 kubelet[4307]: E1205 06:21:30.463059    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:21:35 functional-882265 kubelet[4307]: E1205 06:21:35.463201    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:21:43 functional-882265 kubelet[4307]: E1205 06:21:43.463571    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:21:50 functional-882265 kubelet[4307]: E1205 06:21:50.463541    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:21:54 functional-882265 kubelet[4307]: E1205 06:21:54.463172    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:22:05 functional-882265 kubelet[4307]: E1205 06:22:05.464034    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:22:08 functional-882265 kubelet[4307]: E1205 06:22:08.463797    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:22:20 functional-882265 kubelet[4307]: E1205 06:22:20.463218    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:22:22 functional-882265 kubelet[4307]: E1205 06:22:22.462917    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:22:35 functional-882265 kubelet[4307]: E1205 06:22:35.463445    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:22:37 functional-882265 kubelet[4307]: E1205 06:22:37.463214    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:22:48 functional-882265 kubelet[4307]: E1205 06:22:48.463915    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:22:52 functional-882265 kubelet[4307]: E1205 06:22:52.463053    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:22:59 functional-882265 kubelet[4307]: E1205 06:22:59.464273    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	Dec 05 06:23:07 functional-882265 kubelet[4307]: E1205 06:23:07.463145    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l9cxp" podUID="eb1f37d6-a693-47c5-b208-981bcb77c994"
	Dec 05 06:23:12 functional-882265 kubelet[4307]: E1205 06:23:12.463180    4307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dwgzf" podUID="aadecfa0-4200-469d-8b29-1d8f78bbe89d"
	
	
	==> kubernetes-dashboard [31533cee97cb04e614e5a19bcf4fd38c2928ecbf9338f9b3d6fe5475f10b867f] <==
	2025/12/05 06:13:20 Starting overwatch
	2025/12/05 06:13:20 Using namespace: kubernetes-dashboard
	2025/12/05 06:13:20 Using in-cluster config to connect to apiserver
	2025/12/05 06:13:20 Using secret token for csrf signing
	2025/12/05 06:13:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/05 06:13:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/05 06:13:20 Successful initial request to the apiserver, version: v1.34.2
	2025/12/05 06:13:20 Generating JWE encryption key
	2025/12/05 06:13:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/05 06:13:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/05 06:13:21 Initializing JWE encryption key from synchronized object
	2025/12/05 06:13:21 Creating in-cluster Sidecar client
	2025/12/05 06:13:21 Successful request to sidecar
	2025/12/05 06:13:21 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [5b19e586d9e9f4b41fb86ef89eaaae6d3a91d76d6426e2c2c47abec7289c9540] <==
	W1205 06:11:55.904477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:55.908867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 06:11:56.002285       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-882265_ff9f4b49-92b8-41f1-a563-3dec1aea18e7!
	W1205 06:11:57.912010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:57.915457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:59.918671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:59.925850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:01.928623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:01.932386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:03.934855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:03.940340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:05.944024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:05.948674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:07.950916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:07.954068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:09.957129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:09.960587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:11.963927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:11.968850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:13.971997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:13.975855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:15.978476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:15.983968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:17.986853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:12:17.990062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d8083146eeb9e961c7d8d7049b62a1a63c3bedb2d07259ed1c396226d5036201] <==
	W1205 06:22:52.453456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:22:54.455726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:22:54.458987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:22:56.461108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:22:56.464602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:22:58.467163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:22:58.471053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:00.474080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:00.478567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:02.480768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:02.484363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:04.487047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:04.491240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:06.493214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:06.496821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:08.499172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:08.503590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:10.506105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:10.509684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:12.513015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:12.516463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:14.519270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:14.522677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:16.525061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:23:16.533342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-882265 -n functional-882265
helpers_test.go:269: (dbg) Run:  kubectl --context functional-882265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-l9cxp hello-node-connect-7d85dfc575-dwgzf
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-882265 describe pod busybox-mount hello-node-75c85bcc94-l9cxp hello-node-connect-7d85dfc575-dwgzf
helpers_test.go:290: (dbg) kubectl --context functional-882265 describe pod busybox-mount hello-node-75c85bcc94-l9cxp hello-node-connect-7d85dfc575-dwgzf:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-882265/192.168.49.2
	Start Time:       Fri, 05 Dec 2025 06:13:07 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e27605e3871c99425964696685ee0e0a14b64079d6ca0c504cd8316e0b36272d
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 05 Dec 2025 06:13:08 +0000
	      Finished:     Fri, 05 Dec 2025 06:13:08 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gv2bz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-gv2bz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-882265
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 736ms (736ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-l9cxp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-882265/192.168.49.2
	Start Time:       Fri, 05 Dec 2025 06:13:04 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvjr7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kvjr7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l9cxp to functional-882265
	  Normal   Pulling    7m34s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m34s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m34s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     11s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-dwgzf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-882265/192.168.49.2
	Start Time:       Fri, 05 Dec 2025 06:13:16 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6c8kk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6c8kk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dwgzf to functional-882265
	  Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.69s)
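Note: the failure above traces back to CRI-O short-name handling: the deployment uses the unqualified image "kicbase/echo-server", and with short-name mode enforcing the pull is rejected as ambiguous. A minimal workaround sketch, assuming the fully-qualified docker.io/kicbase/echo-server:1.0 tag is available (both the registry prefix and the tag are assumptions, not taken from this run):

    # fully qualify the image so no short-name resolution is needed (tag assumed)
    kubectl --context functional-882265 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:1.0
    kubectl --context functional-882265 expose deployment hello-node-connect --type=NodePort --port=8080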

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-882265 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-882265 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-l9cxp" [eb1f37d6-a693-47c5-b208-981bcb77c994] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-882265 -n functional-882265
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-05 06:23:05.038034507 +0000 UTC m=+1101.422281854
functional_test.go:1460: (dbg) Run:  kubectl --context functional-882265 describe po hello-node-75c85bcc94-l9cxp -n default
functional_test.go:1460: (dbg) kubectl --context functional-882265 describe po hello-node-75c85bcc94-l9cxp -n default:
Name:             hello-node-75c85bcc94-l9cxp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-882265/192.168.49.2
Start Time:       Fri, 05 Dec 2025 06:13:04 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvjr7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kvjr7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l9cxp to functional-882265
Normal   Pulling    7m21s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m21s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m21s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m55s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-882265 logs hello-node-75c85bcc94-l9cxp -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-882265 logs hello-node-75c85bcc94-l9cxp -n default: exit status 1 (63.68623ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-l9cxp" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-882265 logs hello-node-75c85bcc94-l9cxp -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.60s)
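Note: an alternative sketch is to relax short-name resolution on the node instead of qualifying the image. The keys below are standard containers-registries.conf settings; the file path inside the minikube node and the assumption that restarting crio is enough to pick up the change are not taken from this report:

    # /etc/containers/registries.conf (path inside the node assumed)
    unqualified-search-registries = ["docker.io"]
    short-name-mode = "permissive"

    # restart the runtime to pick up the change (assumed to be sufficient)
    out/minikube-linux-amd64 -p functional-882265 ssh -- sudo systemctl restart crio

With this in place, crio would resolve kicbase/echo-server against docker.io instead of failing with the "ambiguous list" error seen in the kubelet log.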

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image load --daemon kicbase/echo-server:functional-882265 --alsologtostderr
E1205 06:13:23.427367   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-882265" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.90s)
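Note: when image load --daemon succeeds, the tag should show up in the cluster's image list; a quick verification sketch using the same commands the test runs:

    out/minikube-linux-amd64 -p functional-882265 image load --daemon kicbase/echo-server:functional-882265 --alsologtostderr
    out/minikube-linux-amd64 -p functional-882265 image ls | grep echo-server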

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image load --daemon kicbase/echo-server:functional-882265 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-882265" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-882265
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image load --daemon kicbase/echo-server:functional-882265 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-882265" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image save kicbase/echo-server:functional-882265 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1205 06:13:26.842533   54307 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:13:26.842807   54307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:26.842816   54307 out.go:374] Setting ErrFile to fd 2...
	I1205 06:13:26.842820   54307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:26.842980   54307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:13:26.843465   54307 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:26.843551   54307 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:26.843918   54307 cli_runner.go:164] Run: docker container inspect functional-882265 --format={{.State.Status}}
	I1205 06:13:26.861474   54307 ssh_runner.go:195] Run: systemctl --version
	I1205 06:13:26.861546   54307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-882265
	I1205 06:13:26.877955   54307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-882265/id_rsa Username:docker}
	I1205 06:13:26.973409   54307 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1205 06:13:26.973470   54307 cache_images.go:255] Failed to load cached images for "functional-882265": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1205 06:13:26.973498   54307 cache_images.go:267] failed pushing to: functional-882265

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
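Note: ImageSaveToFile and ImageLoadFromFile are meant to round-trip the image through a tarball; the load here fails only because the earlier save never wrote the file. The intended sequence, sketched with the same paths the test uses:

    out/minikube-linux-amd64 -p functional-882265 image save kicbase/echo-server:functional-882265 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-882265 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar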

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-882265
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image save --daemon kicbase/echo-server:functional-882265 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-882265
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-882265: exit status 1 (16.081128ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-882265

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-882265

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
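Note: image save --daemon is expected to export the tag into the host Docker daemon, which the test then inspects under the localhost/ prefix. A hedged verification sketch:

    out/minikube-linux-amd64 -p functional-882265 image save --daemon kicbase/echo-server:functional-882265 --alsologtostderr
    docker image ls | grep echo-server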

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 service --namespace=default --https --url hello-node: exit status 115 (520.341036ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32384
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-882265 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
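Note: the SVC_UNREACHABLE exits in the ServiceCmd subtests are a downstream effect of the hello-node pods above never becoming Ready, so the service has no endpoints. A sketch of the readiness check one would run before asking for the URL (the 60s timeout is arbitrary):

    kubectl --context functional-882265 wait --for=condition=ready pod -l app=hello-node --timeout=60s
    out/minikube-linux-amd64 -p functional-882265 service hello-node --url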

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 service hello-node --url --format={{.IP}}: exit status 115 (522.772929ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-882265 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 service hello-node --url: exit status 115 (518.81548ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32384
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-882265 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32384
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-959058 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-959058 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-c67x5" [5c8c2929-b47a-4594-9c45-c4a1ef985b78] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959058 -n functional-959058
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-05 06:35:14.552264075 +0000 UTC m=+1830.936511431
functional_test.go:1645: (dbg) Run:  kubectl --context functional-959058 describe po hello-node-connect-9f67c86d4-c67x5 -n default
functional_test.go:1645: (dbg) kubectl --context functional-959058 describe po hello-node-connect-9f67c86d4-c67x5 -n default:
Name:             hello-node-connect-9f67c86d4-c67x5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-959058/192.168.49.2
Start Time:       Fri, 05 Dec 2025 06:25:14 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vsfrj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vsfrj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-c67x5 to functional-959058
Normal   Pulling    7m16s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m16s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m16s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m50s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-959058 logs hello-node-connect-9f67c86d4-c67x5 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-959058 logs hello-node-connect-9f67c86d4-c67x5 -n default: exit status 1 (64.646336ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-c67x5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-959058 logs hello-node-connect-9f67c86d4-c67x5 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-959058 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-c67x5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-959058/192.168.49.2
Start Time:       Fri, 05 Dec 2025 06:25:14 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vsfrj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vsfrj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-c67x5 to functional-959058
Normal   Pulling    7m16s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m16s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m16s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m50s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-959058 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-959058 logs -l app=hello-node-connect: exit status 1 (57.359264ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-c67x5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-959058 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-959058 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.77.68
IPs:                      10.106.77.68
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31094/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-959058
helpers_test.go:243: (dbg) docker inspect functional-959058:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f39c45a2dc75113d617f564902198800109d67ba083f7d2329433487a254d79",
	        "Created": "2025-12-05T06:23:22.878929511Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 62848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:23:22.91230871Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/8f39c45a2dc75113d617f564902198800109d67ba083f7d2329433487a254d79/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f39c45a2dc75113d617f564902198800109d67ba083f7d2329433487a254d79/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f39c45a2dc75113d617f564902198800109d67ba083f7d2329433487a254d79/hosts",
	        "LogPath": "/var/lib/docker/containers/8f39c45a2dc75113d617f564902198800109d67ba083f7d2329433487a254d79/8f39c45a2dc75113d617f564902198800109d67ba083f7d2329433487a254d79-json.log",
	        "Name": "/functional-959058",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-959058:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-959058",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f39c45a2dc75113d617f564902198800109d67ba083f7d2329433487a254d79",
	                "LowerDir": "/var/lib/docker/overlay2/49e01f0113ecd87c7f55d3ba899d635bcecf839f59fd8cefc46f24efcf56bb3d-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e01f0113ecd87c7f55d3ba899d635bcecf839f59fd8cefc46f24efcf56bb3d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e01f0113ecd87c7f55d3ba899d635bcecf839f59fd8cefc46f24efcf56bb3d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e01f0113ecd87c7f55d3ba899d635bcecf839f59fd8cefc46f24efcf56bb3d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-959058",
	                "Source": "/var/lib/docker/volumes/functional-959058/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-959058",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-959058",
	                "name.minikube.sigs.k8s.io": "functional-959058",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0092ae7db256112ed769e04629f0ab6c0a8a5062b2ee92f871a49e858223d2cf",
	            "SandboxKey": "/var/run/docker/netns/0092ae7db256",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-959058": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bdd4a69fb0af2f520c50f355b003934c258965477a3b10d008189e001b808eb0",
	                    "EndpointID": "c7578a80e694556d64b9e63743105fdbd89b24623463d881914af81dfe197562",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ba:55:c6:ee:11:b0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-959058",
	                        "8f39c45a2dc7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
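The dump above is the full docker container inspect output for the kic container backing the profile. As an aside (assuming the Docker CLI is available on the build host, which the logs below confirm), a single field can be pulled with a --format Go template instead of scanning the whole JSON; a minimal sketch for the published host port of the API server (8441/tcp):

	docker container inspect functional-959058 \
	  --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'

For the state captured here this would print 32786, matching the Ports section of the inspect output.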
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-959058 -n functional-959058
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-959058 logs -n 25: (1.166045507s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                     ARGS                                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-959058 ssh -- ls -la /mount-9p                                                                                                     │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ ssh            │ functional-959058 ssh sudo umount -f /mount-9p                                                                                                │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │                     │
	│ mount          │ -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3991546214/001:/mount2 --alsologtostderr -v=1          │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │                     │
	│ mount          │ -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3991546214/001:/mount3 --alsologtostderr -v=1          │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │                     │
	│ ssh            │ functional-959058 ssh findmnt -T /mount1                                                                                                      │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │                     │
	│ mount          │ -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3991546214/001:/mount1 --alsologtostderr -v=1          │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │                     │
	│ ssh            │ functional-959058 ssh findmnt -T /mount1                                                                                                      │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ ssh            │ functional-959058 ssh findmnt -T /mount2                                                                                                      │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ ssh            │ functional-959058 ssh findmnt -T /mount3                                                                                                      │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ mount          │ -p functional-959058 --kill=true                                                                                                              │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │                     │
	│ start          │ -p functional-959058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │                     │
	│ start          │ -p functional-959058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │                     │
	│ start          │ -p functional-959058 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-959058 --alsologtostderr -v=1                                                                                │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ license        │                                                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ update-context │ functional-959058 update-context --alsologtostderr -v=2                                                                                       │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ update-context │ functional-959058 update-context --alsologtostderr -v=2                                                                                       │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ update-context │ functional-959058 update-context --alsologtostderr -v=2                                                                                       │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ image          │ functional-959058 image ls --format short --alsologtostderr                                                                                   │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ image          │ functional-959058 image ls --format json --alsologtostderr                                                                                    │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ image          │ functional-959058 image ls --format table --alsologtostderr                                                                                   │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ image          │ functional-959058 image ls --format yaml --alsologtostderr                                                                                    │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ ssh            │ functional-959058 ssh pgrep buildkitd                                                                                                         │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │                     │
	│ image          │ functional-959058 image build -t localhost/my-image:functional-959058 testdata/build --alsologtostderr                                        │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	│ image          │ functional-959058 image ls                                                                                                                    │ functional-959058 │ jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:25 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:25:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:25:33.511922   79151 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:25:33.512141   79151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:25:33.512149   79151 out.go:374] Setting ErrFile to fd 2...
	I1205 06:25:33.512153   79151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:25:33.512335   79151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:25:33.512752   79151 out.go:368] Setting JSON to false
	I1205 06:25:33.513674   79151 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4077,"bootTime":1764911856,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:25:33.513726   79151 start.go:143] virtualization: kvm guest
	I1205 06:25:33.515353   79151 out.go:179] * [functional-959058] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:25:33.516712   79151 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:25:33.516715   79151 notify.go:221] Checking for updates...
	I1205 06:25:33.517824   79151 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:25:33.518997   79151 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:25:33.520164   79151 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:25:33.521339   79151 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:25:33.522475   79151 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:25:33.523858   79151 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:25:33.524406   79151 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:25:33.546870   79151 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:25:33.547007   79151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:25:33.599001   79151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-05 06:25:33.590381533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:25:33.599098   79151 docker.go:319] overlay module found
	I1205 06:25:33.600529   79151 out.go:179] * Using the docker driver based on existing profile
	I1205 06:25:33.601497   79151 start.go:309] selected driver: docker
	I1205 06:25:33.601509   79151 start.go:927] validating driver "docker" against &{Name:functional-959058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959058 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:25:33.601591   79151 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:25:33.601666   79151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:25:33.652987   79151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-05 06:25:33.643015962 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:25:33.653719   79151 cni.go:84] Creating CNI manager for ""
	I1205 06:25:33.653790   79151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:25:33.653830   79151 start.go:353] cluster config:
	{Name:functional-959058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:25:33.655368   79151 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 05 06:25:39 functional-959058 crio[4619]: time="2025-12-05T06:25:39.428797638Z" level=info msg="Starting container: 2955a803f24cb09493b019b14c267b6316db5024ae1e087912dc302c6eb09ad5" id=9a3fc7e9-929d-4047-b228-b49666ffd557 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 06:25:39 functional-959058 crio[4619]: time="2025-12-05T06:25:39.430473455Z" level=info msg="Started container" PID=9090 containerID=2955a803f24cb09493b019b14c267b6316db5024ae1e087912dc302c6eb09ad5 description=kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zvw92/kubernetes-dashboard id=9a3fc7e9-929d-4047-b228-b49666ffd557 name=/runtime.v1.RuntimeService/StartContainer sandboxID=92468c9594438a17123c7be16a88a7858bf8c348a6452e6fb70ce84be9fc47f5
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.396406963Z" level=info msg="Stopping pod sandbox: c19bfa4550cc7113b3ed661f78595c7bacab3b7dadbb24563f112b79fa0c73d4" id=1e498589-8dae-41d5-9a1d-bc67acc93cdd name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.396463639Z" level=info msg="Stopped pod sandbox (already stopped): c19bfa4550cc7113b3ed661f78595c7bacab3b7dadbb24563f112b79fa0c73d4" id=1e498589-8dae-41d5-9a1d-bc67acc93cdd name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.396862204Z" level=info msg="Removing pod sandbox: c19bfa4550cc7113b3ed661f78595c7bacab3b7dadbb24563f112b79fa0c73d4" id=5cfb378e-614b-4782-b3c6-f7b6b4390b58 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.399463442Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.39953523Z" level=info msg="Removed pod sandbox: c19bfa4550cc7113b3ed661f78595c7bacab3b7dadbb24563f112b79fa0c73d4" id=5cfb378e-614b-4782-b3c6-f7b6b4390b58 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.399925501Z" level=info msg="Stopping pod sandbox: 58d7527e53b73c044cb12234faec85307f1649f46b0b9c22cc570355c8473cd5" id=422c0009-bbef-4521-9f43-a8cd86d2a13f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.399976369Z" level=info msg="Stopped pod sandbox (already stopped): 58d7527e53b73c044cb12234faec85307f1649f46b0b9c22cc570355c8473cd5" id=422c0009-bbef-4521-9f43-a8cd86d2a13f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.400216538Z" level=info msg="Removing pod sandbox: 58d7527e53b73c044cb12234faec85307f1649f46b0b9c22cc570355c8473cd5" id=7bb5f327-c208-4067-888a-deaad44e6094 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.402627774Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.402686311Z" level=info msg="Removed pod sandbox: 58d7527e53b73c044cb12234faec85307f1649f46b0b9c22cc570355c8473cd5" id=7bb5f327-c208-4067-888a-deaad44e6094 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.402997239Z" level=info msg="Stopping pod sandbox: 619908b779a5bc309967345a545d1740ffdd918f5ab056e1b7cfa08264bd0d0e" id=49ca099b-0ea3-4bbd-9f4c-4fc227b011f3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.403029292Z" level=info msg="Stopped pod sandbox (already stopped): 619908b779a5bc309967345a545d1740ffdd918f5ab056e1b7cfa08264bd0d0e" id=49ca099b-0ea3-4bbd-9f4c-4fc227b011f3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.403258843Z" level=info msg="Removing pod sandbox: 619908b779a5bc309967345a545d1740ffdd918f5ab056e1b7cfa08264bd0d0e" id=642e01ca-6efb-40b7-960f-d60b760cc7fb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.405111576Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 06:25:46 functional-959058 crio[4619]: time="2025-12-05T06:25:46.405154936Z" level=info msg="Removed pod sandbox: 619908b779a5bc309967345a545d1740ffdd918f5ab056e1b7cfa08264bd0d0e" id=642e01ca-6efb-40b7-960f-d60b760cc7fb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:25:53 functional-959058 crio[4619]: time="2025-12-05T06:25:53.411700158Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d556ffe4-19c0-408f-b556-ba55d39e1268 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:26:06 functional-959058 crio[4619]: time="2025-12-05T06:26:06.412968762Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b3180627-97c3-442f-a759-ea27af9e3fe0 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:26:34 functional-959058 crio[4619]: time="2025-12-05T06:26:34.412480415Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a9fa0ca9-8028-4e66-bbd7-6b65baf6941f name=/runtime.v1.ImageService/PullImage
	Dec 05 06:26:55 functional-959058 crio[4619]: time="2025-12-05T06:26:55.4119522Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3e22c701-6126-4975-bdf7-f2343a3dfdb0 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:27:58 functional-959058 crio[4619]: time="2025-12-05T06:27:58.412193292Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=55694edf-ae66-4a7b-8c91-fff4b677874b name=/runtime.v1.ImageService/PullImage
	Dec 05 06:28:20 functional-959058 crio[4619]: time="2025-12-05T06:28:20.412683428Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=633e69da-9884-4bee-9cbf-37d25cd85b88 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:30:51 functional-959058 crio[4619]: time="2025-12-05T06:30:51.411744767Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3342b11e-efe9-4819-b429-33c4aad3730d name=/runtime.v1.ImageService/PullImage
	Dec 05 06:31:07 functional-959058 crio[4619]: time="2025-12-05T06:31:07.411657804Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0bccc524-9f66-4f10-aadd-f658a94b16c6 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2955a803f24cb       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   92468c9594438       kubernetes-dashboard-b84665fb8-zvw92         kubernetes-dashboard
	93dc24de4e402       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   31b89b0a61e26       dashboard-metrics-scraper-5565989548-8ntdr   kubernetes-dashboard
	9ad553fce8ee7       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   f318bb0246591       sp-pod                                       default
	07dabe11ffe0a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   e1ee7f7e6b5de       busybox-mount                                default
	017fc58cbb34d       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   01ea902d97b96       nginx-svc                                    default
	f3f9b54ab6f2b       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   ec4f067883645       mysql-844cf969f6-j8fqk                       default
	1cb6f2f8e5f9c       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                                 10 minutes ago      Running             kube-apiserver              0                   7f55e64a220d6       kube-apiserver-functional-959058             kube-system
	92728725961a3       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 10 minutes ago      Running             kube-controller-manager     2                   10151ff043c91       kube-controller-manager-functional-959058    kube-system
	e237ffa00f3c5       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 10 minutes ago      Running             etcd                        1                   15f1a30623427       etcd-functional-959058                       kube-system
	ea14014b91fc7       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 10 minutes ago      Exited              kube-controller-manager     1                   10151ff043c91       kube-controller-manager-functional-959058    kube-system
	f64123ab29e88       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 10 minutes ago      Running             kube-scheduler              1                   a9f41f12ee09f       kube-scheduler-functional-959058             kube-system
	bd313533ab3ee       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 10 minutes ago      Running             coredns                     1                   d8c5e0dcdd934       coredns-7d764666f9-8kbvw                     kube-system
	ccbf39faf4e4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   4f7a275a45064       storage-provisioner                          kube-system
	4f93509b515db       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 10 minutes ago      Running             kube-proxy                  1                   3ed6acea9e350       kube-proxy-qdwhq                             kube-system
	63fc146cae5b4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   649fdbe81c2bd       kindnet-7ptzc                                kube-system
	48e93d9d9ea01       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 11 minutes ago      Exited              coredns                     0                   d8c5e0dcdd934       coredns-7d764666f9-8kbvw                     kube-system
	24929246e844e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   4f7a275a45064       storage-provisioner                          kube-system
	160c9292d28cc       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11               11 minutes ago      Exited              kindnet-cni                 0                   649fdbe81c2bd       kindnet-7ptzc                                kube-system
	825b08e12f98e       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 11 minutes ago      Exited              kube-proxy                  0                   3ed6acea9e350       kube-proxy-qdwhq                             kube-system
	40558e44f1b5e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 11 minutes ago      Exited              etcd                        0                   15f1a30623427       etcd-functional-959058                       kube-system
	65f2f74b86f05       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 11 minutes ago      Exited              kube-scheduler              0                   a9f41f12ee09f       kube-scheduler-functional-959058             kube-system
	
	
	==> coredns [48e93d9d9ea017d2a5f7923d824b28e33e1f688bb39acf5e7d9c3d903b204b7c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44155 - 23552 "HINFO IN 3780393727833112043.7271317127003463464. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.46681667s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bd313533ab3ee0f10e9a4c9d6f5917b9df25bafa59e0be204ba2ad3c0b79e333] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:40523 - 21720 "HINFO IN 7854985787579198483.1968467459117377205. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.092208909s
	
	
	==> describe nodes <==
	Name:               functional-959058
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-959058
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=functional-959058
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_23_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:23:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-959058
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:35:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:32:28 +0000   Fri, 05 Dec 2025 06:23:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:32:28 +0000   Fri, 05 Dec 2025 06:23:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:32:28 +0000   Fri, 05 Dec 2025 06:23:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:32:28 +0000   Fri, 05 Dec 2025 06:24:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-959058
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                c29f0549-e473-4ac5-87c8-5eb06d31959b
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-sxfrc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  default                     hello-node-connect-9f67c86d4-c67x5            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-j8fqk                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 coredns-7d764666f9-8kbvw                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-959058                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-7ptzc                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-959058              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-959058     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-qdwhq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-959058              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-8ntdr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zvw92          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  11m   node-controller  Node functional-959058 event: Registered Node functional-959058 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-959058 event: Registered Node functional-959058 in Controller
	
	
	==> dmesg <==
	[  +0.081455] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024960] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.135465] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 5 06:07] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.022771] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023869] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023920] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023880] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +2.047782] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +4.032580] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +8.063178] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[ +16.381345] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[Dec 5 06:08] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	
	
	==> etcd [40558e44f1b5edf157e5f8ab4b73e3839a344c2185b5da669e9e4f44cbafa96b] <==
	{"level":"warn","ts":"2025-12-05T06:23:42.018580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:23:42.036483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:23:42.039396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:23:42.045386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:23:42.051405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:23:42.057493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:23:42.101843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33766","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T06:24:27.832386Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-05T06:24:27.832461Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-959058","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-05T06:24:27.832558Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-05T06:24:34.834160Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-05T06:24:34.834258Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T06:24:34.834270Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-05T06:24:34.834345Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-05T06:24:34.834363Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-05T06:24:34.834355Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-05T06:24:34.834407Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-05T06:24:34.834404Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-05T06:24:34.834434Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-05T06:24:34.834447Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-05T06:24:34.834416Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T06:24:34.836550Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-05T06:24:34.836613Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T06:24:34.836643Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-05T06:24:34.836652Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-959058","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [e237ffa00f3c506af47d929a9318b624f86bd8164639a274b8502052afa5322f] <==
	{"level":"warn","ts":"2025-12-05T06:24:47.406301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.412424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.418533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.424375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.431344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.439105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.445905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.452942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.458922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.464881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.470875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.477142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.483103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.489173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.495568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.501291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.508200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.526848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.531871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.537936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.543882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:24:47.583095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32956","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T06:34:47.138389Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1156}
	{"level":"info","ts":"2025-12-05T06:34:47.158719Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1156,"took":"19.974877ms","hash":1501876711,"current-db-size-bytes":3485696,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-12-05T06:34:47.158763Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1501876711,"revision":1156,"compact-revision":-1}
	
	
	==> kernel <==
	 06:35:15 up  1:17,  0 user,  load average: 0.07, 0.16, 0.29
	Linux functional-959058 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [160c9292d28cc598b2954e3ed1025831e271358831d1c464984239f77562492b] <==
	I1205 06:23:52.780471       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 06:23:52.780684       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1205 06:23:52.780809       1 main.go:148] setting mtu 1500 for CNI 
	I1205 06:23:52.780824       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 06:23:52.780843       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T06:23:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 06:23:52.978456       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 06:23:52.978514       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 06:23:52.978534       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 06:23:52.978643       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 06:23:53.378880       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 06:23:53.378902       1 metrics.go:72] Registering metrics
	I1205 06:23:53.378957       1 controller.go:711] "Syncing nftables rules"
	I1205 06:24:02.984206       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:24:02.984272       1 main.go:301] handling current node
	I1205 06:24:12.984383       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:24:12.984410       1 main.go:301] handling current node
	I1205 06:24:22.978429       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:24:22.978461       1 main.go:301] handling current node
	
	
	==> kindnet [63fc146cae5b410794cb0b6b6db0d25fb5a643fb2a89bc34759e5f7b64057f2a] <==
	I1205 06:33:08.263209       1 main.go:301] handling current node
	I1205 06:33:18.269150       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:33:18.269180       1 main.go:301] handling current node
	I1205 06:33:28.267696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:33:28.267733       1 main.go:301] handling current node
	I1205 06:33:38.264473       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:33:38.264513       1 main.go:301] handling current node
	I1205 06:33:48.263695       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:33:48.263724       1 main.go:301] handling current node
	I1205 06:33:58.267766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:33:58.267796       1 main.go:301] handling current node
	I1205 06:34:08.265516       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:34:08.265545       1 main.go:301] handling current node
	I1205 06:34:18.268662       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:34:18.268702       1 main.go:301] handling current node
	I1205 06:34:28.267389       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:34:28.267428       1 main.go:301] handling current node
	I1205 06:34:38.265184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:34:38.265238       1 main.go:301] handling current node
	I1205 06:34:48.272255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:34:48.272292       1 main.go:301] handling current node
	I1205 06:34:58.265375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:34:58.265411       1 main.go:301] handling current node
	I1205 06:35:08.265228       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:35:08.265259       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1cb6f2f8e5f9c7643ddba8acf00d1d6bd7ab2295d09993c0579c734fca52a326] <==
	I1205 06:24:48.052113       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 06:24:48.053105       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 06:24:48.454499       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 06:24:48.924001       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1205 06:24:49.128672       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1205 06:24:49.129702       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 06:24:49.133053       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 06:24:49.745499       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1205 06:24:49.827396       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 06:24:49.868285       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 06:24:49.873057       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 06:25:01.822008       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.213.110"}
	I1205 06:25:06.280975       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.243.119"}
	I1205 06:25:06.327341       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 06:25:08.890447       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.135.90"}
	I1205 06:25:14.232646       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.77.68"}
	E1205 06:25:20.418113       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57524: use of closed network connection
	E1205 06:25:21.542876       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57548: use of closed network connection
	I1205 06:25:21.661792       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.203.196"}
	E1205 06:25:24.690864       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57604: use of closed network connection
	E1205 06:25:33.755894       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36008: use of closed network connection
	I1205 06:25:34.519882       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 06:25:34.631293       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.243.44"}
	I1205 06:25:34.643764       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.203.118"}
	I1205 06:34:47.953900       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [92728725961a3192a4f4138a3d32dfd2196dd317ad327597d8557db3c1e9541a] <==
	I1205 06:24:51.145146       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.145294       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.145418       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.145430       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.144488       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.145623       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.145814       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1205 06:24:51.145896       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.145987       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-959058"
	I1205 06:24:51.146059       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1205 06:24:51.146384       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.143280       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.145818       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.150309       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.152088       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 06:24:51.243062       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:51.243076       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1205 06:24:51.243080       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1205 06:24:51.253106       1 shared_informer.go:377] "Caches are synced"
	E1205 06:25:34.562598       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1205 06:25:34.568022       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1205 06:25:34.569298       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1205 06:25:34.575895       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1205 06:25:34.578012       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1205 06:25:34.581182       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [ea14014b91fc7886cc639b4871844ff4de9e47e2883fb2c498bcd8051dce8b73] <==
	I1205 06:24:37.377124       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1205 06:24:37.377167       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1205 06:24:37.377190       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1205 06:24:37.377214       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1205 06:24:37.377241       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1205 06:24:37.377260       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1205 06:24:37.377315       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1205 06:24:37.377358       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1205 06:24:37.377377       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1205 06:24:37.377408       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1205 06:24:37.391255       1 controller_descriptor.go:99] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I1205 06:24:37.391272       1 controllermanager.go:579] "Warning: skipping controller" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller"
	I1205 06:24:37.392879       1 controllermanager.go:627] "Warning: controller is disabled" controller="selinux-warning-controller"
	I1205 06:24:37.630553       1 controller_descriptor.go:99] "Controller is disabled by a feature gate" controller="podcertificaterequest-cleaner-controller" requiredFeatureGates=["PodCertificateRequest"]
	I1205 06:24:37.630581       1 controllermanager.go:579] "Warning: skipping controller" controller="podcertificaterequest-cleaner-controller"
	I1205 06:24:37.630589       1 controller_descriptor.go:107] "Skipping a cloud provider controller" controller="service-lb-controller"
	I1205 06:24:37.630594       1 controllermanager.go:579] "Warning: skipping controller" controller="service-lb-controller"
	I1205 06:24:37.630600       1 controller_descriptor.go:107] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I1205 06:24:37.630605       1 controllermanager.go:579] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1205 06:24:37.834226       1 controllermanager.go:579] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1205 06:24:37.930590       1 node_lifecycle_controller.go:419] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1205 06:24:37.930628       1 controller_descriptor.go:107] "Skipping a cloud provider controller" controller="node-route-controller"
	I1205 06:24:37.930634       1 controllermanager.go:579] "Warning: skipping controller" controller="node-route-controller"
	E1205 06:24:38.128689       1 controllermanager.go:575] "Error initializing a controller" err="failed to create Kubernetes client for \"service-account-controller\": Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/service-account-controller\": dial tcp 192.168.49.2:8441: connect: connection refused" controller="serviceaccount-controller"
	E1205 06:24:38.128719       1 controllermanager.go:257] "Error building controllers" err="failed to create Kubernetes client for \"service-account-controller\": Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/service-account-controller\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [4f93509b515dbe1dd9034048b12de943584572803581b020870d74f4074d88de] <==
	I1205 06:24:27.977399       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:24:28.052961       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 06:24:36.953684       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:36.953731       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1205 06:24:36.953802       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:24:36.972561       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:24:36.972625       1 server_linux.go:136] "Using iptables Proxier"
	I1205 06:24:36.977821       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:24:36.978071       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1205 06:24:36.978086       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:24:36.979289       1 config.go:200] "Starting service config controller"
	I1205 06:24:36.979406       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:24:36.979360       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:24:36.979478       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:24:36.979493       1 config.go:309] "Starting node config controller"
	I1205 06:24:36.979505       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:24:36.979512       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:24:36.979522       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:24:36.979527       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:24:37.079896       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:24:37.179910       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 06:24:37.880471       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [825b08e12f98e611f2ac49776d5d5a6237ec1ac66c0d3db370c37bfe58dcec88] <==
	I1205 06:23:50.906383       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:23:50.989297       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 06:23:51.090095       1 shared_informer.go:377] "Caches are synced"
	I1205 06:23:51.090126       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1205 06:23:51.090245       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:23:51.107836       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:23:51.107887       1 server_linux.go:136] "Using iptables Proxier"
	I1205 06:23:51.112828       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:23:51.113140       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1205 06:23:51.113159       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:23:51.114745       1 config.go:200] "Starting service config controller"
	I1205 06:23:51.114766       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:23:51.114786       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:23:51.114813       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:23:51.114838       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:23:51.114847       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:23:51.114928       1 config.go:309] "Starting node config controller"
	I1205 06:23:51.114947       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:23:51.114954       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:23:51.215374       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:23:51.215392       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 06:23:51.215418       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [65f2f74b86f055d5acd0c7467306f0fac9627629c5b3a7ebbcd0be0c7ca8b46b] <==
	E1205 06:23:43.377721       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1205 06:23:43.378486       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1205 06:23:43.402270       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1205 06:23:43.403018       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1205 06:23:43.414842       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1205 06:23:43.415619       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1205 06:23:43.419460       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1205 06:23:43.420099       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1205 06:23:43.523032       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1205 06:23:43.523818       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1205 06:23:43.534845       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1205 06:23:43.535724       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1205 06:23:43.547605       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1205 06:23:43.548350       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1205 06:23:43.562175       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1205 06:23:43.563040       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1205 06:23:43.679306       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1205 06:23:43.680189       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	I1205 06:23:45.274818       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:34.941786       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1205 06:24:34.941786       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:24:34.942026       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1205 06:24:34.942051       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1205 06:24:34.942067       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1205 06:24:34.942087       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f64123ab29e88b0006f49e9cb9c16c60ed0abd6cd2b7e09b93f4d91b22438f5d] <==
	I1205 06:24:36.145281       1 serving.go:386] Generated self-signed cert in-memory
	I1205 06:24:36.612561       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1205 06:24:36.612587       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:24:36.618141       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1205 06:24:36.618167       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:24:36.618179       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 06:24:36.618182       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 06:24:36.618190       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 06:24:36.618211       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 06:24:36.618216       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 06:24:36.618671       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 06:24:36.718978       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:36.719034       1 shared_informer.go:377] "Caches are synced"
	I1205 06:24:36.719059       1 shared_informer.go:377] "Caches are synced"
	E1205 06:24:47.930837       1 reflector.go:204] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	
	
	==> kubelet <==
	Dec 05 06:33:37 functional-959058 kubelet[5360]: E1205 06:33:37.411807    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-c67x5" podUID="5c8c2929-b47a-4594-9c45-c4a1ef985b78"
	Dec 05 06:33:48 functional-959058 kubelet[5360]: E1205 06:33:48.412352    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-sxfrc" podUID="23409b81-dc02-44c5-8dfa-720055c52941"
	Dec 05 06:33:51 functional-959058 kubelet[5360]: E1205 06:33:51.411691    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-c67x5" podUID="5c8c2929-b47a-4594-9c45-c4a1ef985b78"
	Dec 05 06:33:53 functional-959058 kubelet[5360]: E1205 06:33:53.411897    5360 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-959058" containerName="kube-controller-manager"
	Dec 05 06:33:53 functional-959058 kubelet[5360]: E1205 06:33:53.412042    5360 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-959058" containerName="etcd"
	Dec 05 06:33:58 functional-959058 kubelet[5360]: E1205 06:33:58.411630    5360 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-959058" containerName="kube-apiserver"
	Dec 05 06:34:01 functional-959058 kubelet[5360]: E1205 06:34:01.411856    5360 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-8kbvw" containerName="coredns"
	Dec 05 06:34:02 functional-959058 kubelet[5360]: E1205 06:34:02.411982    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-sxfrc" podUID="23409b81-dc02-44c5-8dfa-720055c52941"
	Dec 05 06:34:04 functional-959058 kubelet[5360]: E1205 06:34:04.411795    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-c67x5" podUID="5c8c2929-b47a-4594-9c45-c4a1ef985b78"
	Dec 05 06:34:14 functional-959058 kubelet[5360]: E1205 06:34:14.413770    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-sxfrc" podUID="23409b81-dc02-44c5-8dfa-720055c52941"
	Dec 05 06:34:17 functional-959058 kubelet[5360]: E1205 06:34:17.411550    5360 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8ntdr" containerName="dashboard-metrics-scraper"
	Dec 05 06:34:18 functional-959058 kubelet[5360]: E1205 06:34:18.413945    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-c67x5" podUID="5c8c2929-b47a-4594-9c45-c4a1ef985b78"
	Dec 05 06:34:23 functional-959058 kubelet[5360]: E1205 06:34:23.411271    5360 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zvw92" containerName="kubernetes-dashboard"
	Dec 05 06:34:25 functional-959058 kubelet[5360]: E1205 06:34:25.412041    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-sxfrc" podUID="23409b81-dc02-44c5-8dfa-720055c52941"
	Dec 05 06:34:31 functional-959058 kubelet[5360]: E1205 06:34:31.411713    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-c67x5" podUID="5c8c2929-b47a-4594-9c45-c4a1ef985b78"
	Dec 05 06:34:36 functional-959058 kubelet[5360]: E1205 06:34:36.412125    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-sxfrc" podUID="23409b81-dc02-44c5-8dfa-720055c52941"
	Dec 05 06:34:44 functional-959058 kubelet[5360]: E1205 06:34:44.412263    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-c67x5" podUID="5c8c2929-b47a-4594-9c45-c4a1ef985b78"
	Dec 05 06:34:47 functional-959058 kubelet[5360]: E1205 06:34:47.411469    5360 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-959058" containerName="kube-scheduler"
	Dec 05 06:34:51 functional-959058 kubelet[5360]: E1205 06:34:51.411540    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-sxfrc" podUID="23409b81-dc02-44c5-8dfa-720055c52941"
	Dec 05 06:34:57 functional-959058 kubelet[5360]: E1205 06:34:57.412148    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-c67x5" podUID="5c8c2929-b47a-4594-9c45-c4a1ef985b78"
	Dec 05 06:34:59 functional-959058 kubelet[5360]: E1205 06:34:59.411709    5360 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-959058" containerName="etcd"
	Dec 05 06:35:02 functional-959058 kubelet[5360]: E1205 06:35:02.411967    5360 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-8kbvw" containerName="coredns"
	Dec 05 06:35:03 functional-959058 kubelet[5360]: E1205 06:35:03.411611    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-sxfrc" podUID="23409b81-dc02-44c5-8dfa-720055c52941"
	Dec 05 06:35:08 functional-959058 kubelet[5360]: E1205 06:35:08.411918    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-c67x5" podUID="5c8c2929-b47a-4594-9c45-c4a1ef985b78"
	Dec 05 06:35:14 functional-959058 kubelet[5360]: E1205 06:35:14.411903    5360 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-sxfrc" podUID="23409b81-dc02-44c5-8dfa-720055c52941"
	
	
	==> kubernetes-dashboard [2955a803f24cb09493b019b14c267b6316db5024ae1e087912dc302c6eb09ad5] <==
	2025/12/05 06:25:39 Using namespace: kubernetes-dashboard
	2025/12/05 06:25:39 Using in-cluster config to connect to apiserver
	2025/12/05 06:25:39 Using secret token for csrf signing
	2025/12/05 06:25:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/05 06:25:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/05 06:25:39 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/05 06:25:39 Generating JWE encryption key
	2025/12/05 06:25:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/05 06:25:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/05 06:25:39 Initializing JWE encryption key from synchronized object
	2025/12/05 06:25:39 Creating in-cluster Sidecar client
	2025/12/05 06:25:39 Successful request to sidecar
	2025/12/05 06:25:39 Serving insecurely on HTTP port: 9090
	2025/12/05 06:25:39 Starting overwatch
	
	
	==> storage-provisioner [24929246e844ec134f0e201bb2773598db55d26744dc7bbab57402fe08439fec] <==
	W1205 06:24:03.520344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:03.523293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 06:24:03.619019       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-959058_71983ef5-0161-4790-823c-724913d791d0!
	W1205 06:24:05.526816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:05.530346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:07.532750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:07.536639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:09.540249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:09.547304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:11.549722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:11.553125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:13.556592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:13.560606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:15.563957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:15.567386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:17.570582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:17.574559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:19.578181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:19.581698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:21.585046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:21.589821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:23.592825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:23.596526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:25.599179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:24:25.602694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ccbf39faf4e4cbb506be094004f482b094c2d753836c8a63657a4b394480fcc2] <==
	W1205 06:34:52.192948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:34:54.195850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:34:54.200604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:34:56.203312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:34:56.206978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:34:58.209878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:34:58.213562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:00.216659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:00.220290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:02.223091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:02.227621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:04.230532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:04.234129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:06.236938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:06.240770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:08.243216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:08.247597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:10.250225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:10.253597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:12.256139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:12.259673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:14.263127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:14.267223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:16.269740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:35:16.273230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959058 -n functional-959058
helpers_test.go:269: (dbg) Run:  kubectl --context functional-959058 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-sxfrc hello-node-connect-9f67c86d4-c67x5
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-959058 describe pod busybox-mount hello-node-5758569b79-sxfrc hello-node-connect-9f67c86d4-c67x5
helpers_test.go:290: (dbg) kubectl --context functional-959058 describe pod busybox-mount hello-node-5758569b79-sxfrc hello-node-connect-9f67c86d4-c67x5:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-959058/192.168.49.2
	Start Time:       Fri, 05 Dec 2025 06:25:24 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://07dabe11ffe0a0fc08227b36be0ba7e9abd1444d629feed87a701809c409e517
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 05 Dec 2025 06:25:25 +0000
	      Finished:     Fri, 05 Dec 2025 06:25:25 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zp285 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zp285:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-959058
	  Normal  Pulling    9m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m51s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 649ms (649ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m51s  kubelet            Container created
	  Normal  Started    9m51s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-sxfrc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-959058/192.168.49.2
	Start Time:       Fri, 05 Dec 2025 06:25:21 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d49px (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-d49px:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m55s                   default-scheduler  Successfully assigned default/hello-node-5758569b79-sxfrc to functional-959058
	  Normal   Pulling    6m56s (x5 over 9m55s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m56s (x5 over 9m55s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m56s (x5 over 9m55s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m48s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m48s (x21 over 9m54s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-c67x5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-959058/192.168.49.2
	Start Time:       Fri, 05 Dec 2025 06:25:14 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vsfrj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vsfrj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-c67x5 to functional-959058
	  Normal   Pulling    7m18s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m18s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m18s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.75s)
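Note on the failure above: the pod events point at crio's short-name resolution, not at the service machinery. With short-name mode set to "enforcing", the unqualified reference kicbase/echo-server cannot be resolved to a single registry, so every pull attempt ends in ErrImagePull. A hedged manual check against this profile (not part of the test run; the fully-qualified docker.io name is an assumption about where the image lives) would be:

	out/minikube-linux-amd64 -p functional-959058 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest

If the fully-qualified pull succeeds while the short name fails, deploying with docker.io/kicbase/echo-server, or adjusting short-name-mode / unqualified-search-registries in /etc/containers/registries.conf inside the node, would remove the ambiguity.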

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image load --daemon kicbase/echo-server:functional-959058 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-959058" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.04s)
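The assertion at functional_test.go:461 only inspects `image ls` output, so it does not show whether the load itself errored or whether the source tag was missing on the host. A hedged manual reproduction outside the harness might look like this (the grep pattern is just illustrative):

	docker image inspect kicbase/echo-server:functional-959058 --format '{{.Id}}'
	out/minikube-linux-amd64 -p functional-959058 image load --daemon kicbase/echo-server:functional-959058 --alsologtostderr
	out/minikube-linux-amd64 -p functional-959058 ssh -- sudo crictl images | grep echo-server

The first command confirms the tag exists in the host Docker daemon at all; given the tag/rmi churn elsewhere in this suite, a missing host-side tag would explain why nothing ever shows up in crio.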

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image load --daemon kicbase/echo-server:functional-959058 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-959058" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.16s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (4.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-959058
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image load --daemon kicbase/echo-server:functional-959058 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-959058 image load --daemon kicbase/echo-server:functional-959058 --alsologtostderr: (1.570310854s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-959058 image ls: (2.364925178s)
functional_test.go:461: expected "kicbase/echo-server:functional-959058" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (4.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image save kicbase/echo-server:functional-959058 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.31s)
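`image save` returning in a fraction of a second without creating the tar is consistent with the source tag not being present in the cluster (see the load failures above). A hedged way to verify outside the test, using an illustrative /tmp path rather than the workspace path:

	out/minikube-linux-amd64 -p functional-959058 image ls | grep echo-server
	out/minikube-linux-amd64 -p functional-959058 image save kicbase/echo-server:functional-959058 /tmp/echo-server-save.tar --alsologtostderr
	ls -l /tmp/echo-server-save.tar

If the first command shows no echo-server tag, the save has nothing to export and the missing file is a downstream symptom, not the root cause.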

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1205 06:25:13.373618   74890 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:25:13.373883   74890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:25:13.373893   74890 out.go:374] Setting ErrFile to fd 2...
	I1205 06:25:13.373897   74890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:25:13.374092   74890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:25:13.374634   74890 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:25:13.374727   74890 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:25:13.375121   74890 cli_runner.go:164] Run: docker container inspect functional-959058 --format={{.State.Status}}
	I1205 06:25:13.393487   74890 ssh_runner.go:195] Run: systemctl --version
	I1205 06:25:13.393537   74890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-959058
	I1205 06:25:13.411917   74890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-959058/id_rsa Username:docker}
	I1205 06:25:13.512495   74890 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1205 06:25:13.512574   74890 cache_images.go:255] Failed to load cached images for "functional-959058": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1205 06:25:13.512616   74890 cache_images.go:267] failed pushing to: functional-959058

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.20s)
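The stderr makes this one explicit: cache_images.go stats /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar, and the file was never written because the preceding ImageSaveToFile test failed. To exercise the load path in isolation, a hedged repro would first build a known-good archive on the host (assumes kicbase/echo-server:latest has already been pulled into the host Docker daemon, as earlier tests do; the /tmp path is illustrative):

	docker save -o /tmp/echo-server.tar kicbase/echo-server:latest
	out/minikube-linux-amd64 -p functional-959058 image load /tmp/echo-server.tar --alsologtostderr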

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-959058
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image save --daemon kicbase/echo-server:functional-959058 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-959058
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-959058: exit status 1 (18.855126ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-959058

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-959058

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.38s)
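The empty "[]" plus exit status 1 is simply what `docker image inspect` returns for an absent image; the test expects the saved image to land in the host daemon under the localhost/ prefix. Since the earlier load tests never got the tag into crio, there is nothing for `image save --daemon` to export. If the in-cluster tag did exist, a hedged verification would be:

	out/minikube-linux-amd64 -p functional-959058 image save --daemon kicbase/echo-server:functional-959058 --alsologtostderr
	docker images localhost/kicbase/echo-server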

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-959058 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-959058 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-sxfrc" [23409b81-dc02-44c5-8dfa-720055c52941] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959058 -n functional-959058
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-05 06:35:21.968236886 +0000 UTC m=+1838.352484233
functional_test.go:1460: (dbg) Run:  kubectl --context functional-959058 describe po hello-node-5758569b79-sxfrc -n default
functional_test.go:1460: (dbg) kubectl --context functional-959058 describe po hello-node-5758569b79-sxfrc -n default:
Name:             hello-node-5758569b79-sxfrc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-959058/192.168.49.2
Start Time:       Fri, 05 Dec 2025 06:25:21 +0000
Labels:           app=hello-node
                  pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d49px (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-d49px:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-5758569b79-sxfrc to functional-959058
  Normal   Pulling    7m2s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-959058 logs hello-node-5758569b79-sxfrc -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-959058 logs hello-node-5758569b79-sxfrc -n default: exit status 1 (58.832089ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-sxfrc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-959058 logs hello-node-5758569b79-sxfrc -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.54s)
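Same root cause as ServiceCmdConnect: the deployment is created with the short name kicbase/echo-server, and crio's enforcing short-name mode refuses to resolve it, so the pod never becomes Ready within the 10m window. As a hedged local workaround (not what the test itself does), pointing the existing deployment at the fully-qualified image should clear the ImagePullBackOff:

	kubectl --context functional-959058 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest
	kubectl --context functional-959058 rollout status deployment/hello-node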

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 service --namespace=default --https --url hello-node: exit status 115 (523.535785ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31766
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-959058 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.52s)
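SVC_UNREACHABLE here is a secondary symptom: the NodePort URL is actually printed, but minikube bails out because the hello-node service has no running backing pod (it is still in ImagePullBackOff from the DeployApp failure). A hedged check that separates "service plumbing broken" from "no ready endpoints":

	kubectl --context functional-959058 get endpoints hello-node
	kubectl --context functional-959058 get pods -l app=hello-node

Empty endpoints plus a non-Running pod confirms the HTTPS/Format/URL failures below are all downstream of the image pull, not of the service command itself.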

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 service hello-node --url --format={{.IP}}: exit status 115 (521.219549ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-959058 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 service hello-node --url: exit status 115 (523.240872ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31766
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-959058 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31766
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.32s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-101450 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-101450 --output=json --user=testUser: exit status 80 (2.318651938s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"348b51ad-a94a-4fbe-9be0-abcdd89ea020","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-101450 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"34307edc-aaf6-43c9-9a4e-28286d28e720","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-05T06:44:15Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"fa4aaeff-551d-4535-981c-9d06e82ba480","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-101450 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.32s)
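GUEST_PAUSE fails because `sudo runc list -f json` cannot find its state directory /run/runc inside the node. One hedged explanation (an assumption, not verified in this log) is that this CRI-O build drives containers through a different OCI runtime (for example crun), so runc never creates state under /run/runc. Two quick probes from the host:

	out/minikube-linux-amd64 -p json-output-101450 ssh -- sudo ls /run/runc /run/crun
	out/minikube-linux-amd64 -p json-output-101450 ssh -- sudo crictl info | grep -i runtime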

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.23s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-101450 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-101450 --output=json --user=testUser: exit status 80 (2.227104797s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7369a341-662b-498c-80b1-0bea35df0b73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-101450 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"3af59fa5-1a23-4ecc-bb38-56db9bd93bb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-05T06:44:17Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"20a5c0ca-e817-45bf-9e31-09f5169f447f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-101450 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.23s)

                                                
                                    
x
+
TestPause/serial/Pause (5.69s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-355053 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-355053 --alsologtostderr -v=5: exit status 80 (1.992561051s)

                                                
                                                
-- stdout --
	* Pausing node pause-355053 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:56:54.508559  224945 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:56:54.508657  224945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:56:54.508665  224945 out.go:374] Setting ErrFile to fd 2...
	I1205 06:56:54.508669  224945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:56:54.508871  224945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:56:54.509122  224945 out.go:368] Setting JSON to false
	I1205 06:56:54.509139  224945 mustload.go:66] Loading cluster: pause-355053
	I1205 06:56:54.509490  224945 config.go:182] Loaded profile config "pause-355053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:56:54.509888  224945 cli_runner.go:164] Run: docker container inspect pause-355053 --format={{.State.Status}}
	I1205 06:56:54.527481  224945 host.go:66] Checking if "pause-355053" exists ...
	I1205 06:56:54.527779  224945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:56:54.586975  224945 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:70 SystemTime:2025-12-05 06:56:54.576107379 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:56:54.587870  224945 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-355053 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1205 06:56:54.590731  224945 out.go:179] * Pausing node pause-355053 ... 
	I1205 06:56:54.592133  224945 host.go:66] Checking if "pause-355053" exists ...
	I1205 06:56:54.592473  224945 ssh_runner.go:195] Run: systemctl --version
	I1205 06:56:54.592531  224945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-355053
	I1205 06:56:54.618121  224945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/pause-355053/id_rsa Username:docker}
	I1205 06:56:54.719004  224945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:56:54.730882  224945 pause.go:52] kubelet running: true
	I1205 06:56:54.730954  224945 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 06:56:54.898219  224945 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 06:56:54.898350  224945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 06:56:54.982376  224945 cri.go:89] found id: "a6c753f00973061f2a04aaf7cc5307f88037c9bd9cea9c943593bf2deae5ed9d"
	I1205 06:56:54.982404  224945 cri.go:89] found id: "ef0e38429e2a72947841ee46cd4e3c082cc8d45a2fd25fa53b8b873b3d945b73"
	I1205 06:56:54.982411  224945 cri.go:89] found id: "3c008f56b26231ab9f01e1d4637584edad703e68d28681c0ac32f9b909630a52"
	I1205 06:56:54.982417  224945 cri.go:89] found id: "e01dd197330ac7e05c3d4538a85363df4fc90854af178e996b184e8e5380031e"
	I1205 06:56:54.982482  224945 cri.go:89] found id: "c90e9778f2894bc931fddacdbc550c61db02951ba5cce6883f1354b609119233"
	I1205 06:56:54.982534  224945 cri.go:89] found id: "f251d470cd6736b4a6277b54f6a84799f3538e97b81705f7032e91c72f54a815"
	I1205 06:56:54.982542  224945 cri.go:89] found id: "e1f1cbcf13622926c9b9df1bdc1f1946f032c304a99f73db3aa2d792f98f94e7"
	I1205 06:56:54.982545  224945 cri.go:89] found id: ""
	I1205 06:56:54.982630  224945 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:56:54.995721  224945 retry.go:31] will retry after 343.54473ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:56:54Z" level=error msg="open /run/runc: no such file or directory"
	I1205 06:56:55.340335  224945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:56:55.357830  224945 pause.go:52] kubelet running: false
	I1205 06:56:55.357909  224945 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 06:56:55.528823  224945 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 06:56:55.528992  224945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 06:56:55.620804  224945 cri.go:89] found id: "a6c753f00973061f2a04aaf7cc5307f88037c9bd9cea9c943593bf2deae5ed9d"
	I1205 06:56:55.620919  224945 cri.go:89] found id: "ef0e38429e2a72947841ee46cd4e3c082cc8d45a2fd25fa53b8b873b3d945b73"
	I1205 06:56:55.620928  224945 cri.go:89] found id: "3c008f56b26231ab9f01e1d4637584edad703e68d28681c0ac32f9b909630a52"
	I1205 06:56:55.620932  224945 cri.go:89] found id: "e01dd197330ac7e05c3d4538a85363df4fc90854af178e996b184e8e5380031e"
	I1205 06:56:55.620936  224945 cri.go:89] found id: "c90e9778f2894bc931fddacdbc550c61db02951ba5cce6883f1354b609119233"
	I1205 06:56:55.620940  224945 cri.go:89] found id: "f251d470cd6736b4a6277b54f6a84799f3538e97b81705f7032e91c72f54a815"
	I1205 06:56:55.620945  224945 cri.go:89] found id: "e1f1cbcf13622926c9b9df1bdc1f1946f032c304a99f73db3aa2d792f98f94e7"
	I1205 06:56:55.620949  224945 cri.go:89] found id: ""
	I1205 06:56:55.621006  224945 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:56:55.637708  224945 retry.go:31] will retry after 545.478192ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:56:55Z" level=error msg="open /run/runc: no such file or directory"
	I1205 06:56:56.183426  224945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:56:56.200428  224945 pause.go:52] kubelet running: false
	I1205 06:56:56.200491  224945 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 06:56:56.330440  224945 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 06:56:56.330540  224945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 06:56:56.414510  224945 cri.go:89] found id: "a6c753f00973061f2a04aaf7cc5307f88037c9bd9cea9c943593bf2deae5ed9d"
	I1205 06:56:56.414534  224945 cri.go:89] found id: "ef0e38429e2a72947841ee46cd4e3c082cc8d45a2fd25fa53b8b873b3d945b73"
	I1205 06:56:56.414541  224945 cri.go:89] found id: "3c008f56b26231ab9f01e1d4637584edad703e68d28681c0ac32f9b909630a52"
	I1205 06:56:56.414546  224945 cri.go:89] found id: "e01dd197330ac7e05c3d4538a85363df4fc90854af178e996b184e8e5380031e"
	I1205 06:56:56.414551  224945 cri.go:89] found id: "c90e9778f2894bc931fddacdbc550c61db02951ba5cce6883f1354b609119233"
	I1205 06:56:56.414555  224945 cri.go:89] found id: "f251d470cd6736b4a6277b54f6a84799f3538e97b81705f7032e91c72f54a815"
	I1205 06:56:56.414559  224945 cri.go:89] found id: "e1f1cbcf13622926c9b9df1bdc1f1946f032c304a99f73db3aa2d792f98f94e7"
	I1205 06:56:56.414562  224945 cri.go:89] found id: ""
	I1205 06:56:56.414621  224945 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:56:56.432159  224945 out.go:203] 
	W1205 06:56:56.433270  224945 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:56:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:56:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:56:56.433297  224945 out.go:285] * 
	* 
	W1205 06:56:56.440002  224945 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:56:56.441337  224945 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-355053 --alsologtostderr -v=5" : exit status 80
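The trace above shows the same pattern as the JSONOutput pause failure: crictl sees running kube-system containers, but every `sudo runc list -f json` retry fails with "open /run/runc: no such file or directory", so the pause gives up after three attempts. Before the post-mortem below, a hedged manual probe inside this node would be (the --root value is an assumption; it is simply runc's default state directory):

	out/minikube-linux-amd64 -p pause-355053 ssh -- sudo crictl ps --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 -p pause-355053 ssh -- sudo runc --root /run/runc list -f json

If the node's runtime keeps its state elsewhere (e.g. a crun state directory), that mismatch between what minikube queries and what CRI-O actually uses is the direction to investigate.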
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-355053
helpers_test.go:243: (dbg) docker inspect pause-355053:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5",
	        "Created": "2025-12-05T06:56:04.829859514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:56:04.887387976Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5/hosts",
	        "LogPath": "/var/lib/docker/containers/2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5/2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5-json.log",
	        "Name": "/pause-355053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-355053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-355053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5",
	                "LowerDir": "/var/lib/docker/overlay2/9a54a68140d04b4586aea14d91c445a9261b9965485d7c7451009f44c970156e-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a54a68140d04b4586aea14d91c445a9261b9965485d7c7451009f44c970156e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a54a68140d04b4586aea14d91c445a9261b9965485d7c7451009f44c970156e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a54a68140d04b4586aea14d91c445a9261b9965485d7c7451009f44c970156e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-355053",
	                "Source": "/var/lib/docker/volumes/pause-355053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-355053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-355053",
	                "name.minikube.sigs.k8s.io": "pause-355053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6726555be8e123bc087bf48f0b39dda0074ec0660e6601b6d2a55d8af8f0f245",
	            "SandboxKey": "/var/run/docker/netns/6726555be8e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-355053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0de155e2cf123b4ecfcc2a29a2512261f8846c4622fa408c7641fcd8ec8c9a1c",
	                    "EndpointID": "d46d86ef3ca29df93f4edf4a36c4ff7fd36f1ca2ebb5f24f4de6e308b578aa69",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ea:30:b2:a4:92:cd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-355053",
	                        "2731ca73fa54"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-355053 -n pause-355053
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-355053 -n pause-355053: exit status 2 (396.510008ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-355053 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-644744 --schedule 5m -v=5 --alsologtostderr                                                         │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --cancel-scheduled                                                                           │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │ 05 Dec 25 06:54 UTC │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │ 05 Dec 25 06:55 UTC │
	│ delete  │ -p scheduled-stop-644744                                                                                              │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:55 UTC │
	│ start   │ -p insufficient-storage-238325 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-238325 │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │                     │
	│ delete  │ -p insufficient-storage-238325                                                                                        │ insufficient-storage-238325 │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:55 UTC │
	│ start   │ -p pause-355053 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-355053                │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p offline-crio-314280 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio     │ offline-crio-314280         │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p NoKubernetes-385989 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-385989         │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │                     │
	│ start   │ -p force-systemd-env-435873 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio            │ force-systemd-env-435873    │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p NoKubernetes-385989 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-385989         │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:56 UTC │
	│ delete  │ -p force-systemd-env-435873                                                                                           │ force-systemd-env-435873    │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p stopped-upgrade-515128 --memory=3072 --vm-driver=docker  --container-runtime=crio                                  │ stopped-upgrade-515128      │ jenkins │ v1.35.0 │ 05 Dec 25 06:56 UTC │                     │
	│ start   │ -p NoKubernetes-385989 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-385989         │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:56 UTC │
	│ delete  │ -p offline-crio-314280                                                                                                │ offline-crio-314280         │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p pause-355053 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-355053                │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p missing-upgrade-044081 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-044081      │ jenkins │ v1.35.0 │ 05 Dec 25 06:56 UTC │                     │
	│ delete  │ -p NoKubernetes-385989                                                                                                │ NoKubernetes-385989         │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:56 UTC │
	│ pause   │ -p pause-355053 --alsologtostderr -v=5                                                                                │ pause-355053                │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │                     │
	│ start   │ -p NoKubernetes-385989 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-385989         │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:56:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:56:55.156927  225434 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:56:55.157171  225434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:56:55.157180  225434 out.go:374] Setting ErrFile to fd 2...
	I1205 06:56:55.157185  225434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:56:55.157457  225434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:56:55.158011  225434 out.go:368] Setting JSON to false
	I1205 06:56:55.159136  225434 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5959,"bootTime":1764911856,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:56:55.159185  225434 start.go:143] virtualization: kvm guest
	I1205 06:56:55.161755  225434 out.go:179] * [NoKubernetes-385989] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:56:55.162913  225434 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:56:55.162964  225434 notify.go:221] Checking for updates...
	I1205 06:56:55.165140  225434 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:56:55.166591  225434 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:56:55.167676  225434 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:56:55.168618  225434 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:56:55.170278  225434 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:56:55.172108  225434 config.go:182] Loaded profile config "missing-upgrade-044081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1205 06:56:55.172316  225434 config.go:182] Loaded profile config "pause-355053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:56:55.172496  225434 config.go:182] Loaded profile config "stopped-upgrade-515128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1205 06:56:55.172534  225434 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 06:56:55.172678  225434 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:56:55.202233  225434 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:56:55.202403  225434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:56:55.270438  225434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-05 06:56:55.259673317 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:56:55.270592  225434 docker.go:319] overlay module found
	I1205 06:56:55.272127  225434 out.go:179] * Using the docker driver based on user configuration
	I1205 06:56:55.273192  225434 start.go:309] selected driver: docker
	I1205 06:56:55.273211  225434 start.go:927] validating driver "docker" against <nil>
	I1205 06:56:55.273225  225434 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:56:55.273976  225434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:56:55.339016  225434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-05 06:56:55.327804288 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:56:55.339146  225434 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 06:56:55.339220  225434 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:56:55.339445  225434 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:56:55.343441  225434 out.go:179] * Using Docker driver with root privileges
	I1205 06:56:55.344514  225434 cni.go:84] Creating CNI manager for ""
	I1205 06:56:55.344590  225434 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:56:55.344605  225434 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:56:55.344649  225434 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 06:56:55.344702  225434 start.go:353] cluster config:
	{Name:NoKubernetes-385989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-385989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:56:55.345911  225434 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-385989
	I1205 06:56:55.347520  225434 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:56:55.348812  225434 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:56:55.349888  225434 cache.go:59] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1205 06:56:55.350025  225434 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/NoKubernetes-385989/config.json ...
	I1205 06:56:55.350059  225434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/NoKubernetes-385989/config.json: {Name:mkd94856f1134c9f4d9fb1f628c7b01ee6b0df19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:56:55.350190  225434 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:56:55.380432  225434 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:56:55.380455  225434 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 06:56:55.380474  225434 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:56:55.380505  225434 start.go:360] acquireMachinesLock for NoKubernetes-385989: {Name:mk3f2d7fbe0b75327c4a414bb071365bd5df1b84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:56:55.380578  225434 start.go:364] duration metric: took 52.526µs to acquireMachinesLock for "NoKubernetes-385989"
	I1205 06:56:55.380601  225434 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-385989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-385989 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:56:55.380692  225434 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.745423553Z" level=info msg="RDT not available in the host system"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.745432706Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.746189603Z" level=info msg="Conmon does support the --sync option"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.746212126Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.746227731Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.747027598Z" level=info msg="Conmon does support the --sync option"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.747044844Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.755685447Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.755741488Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.756468855Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.757073252Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.757190906Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.842108137Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-m48hg Namespace:kube-system ID:f129ef27e93f1595ba2fbd4c99a3a73d14a0d49ec1a1d490741fc93c8959a1d0 UID:08bbbed9-1fb4-4963-8c64-32ddd6f85a1e NetNS:/var/run/netns/c04145e5-63c8-4255-b5da-a698fdcbc0f4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000524368}] Aliases:map[]}"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.842376501Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-m48hg for CNI network kindnet (type=ptp)"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.842921804Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.842944701Z" level=info msg="Starting seccomp notifier watcher"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.842995343Z" level=info msg="Create NRI interface"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843105804Z" level=info msg="built-in NRI default validator is disabled"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843115543Z" level=info msg="runtime interface created"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843127394Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843135284Z" level=info msg="runtime interface starting up..."
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843143691Z" level=info msg="starting plugins..."
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843157641Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.84353969Z" level=info msg="No systemd watchdog enabled"
	Dec 05 06:56:50 pause-355053 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a6c753f009730       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   19 seconds ago      Running             coredns                   0                   f129ef27e93f1       coredns-66bc5c9577-m48hg               kube-system
	ef0e38429e2a7       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   30 seconds ago      Running             kube-proxy                0                   5d453405fdb1c       kube-proxy-kqmhr                       kube-system
	3c008f56b2623       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   30 seconds ago      Running             kindnet-cni               0                   a5ec9875a958a       kindnet-5nfzr                          kube-system
	e01dd197330ac       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   41 seconds ago      Running             kube-apiserver            0                   af2ec9a58d330       kube-apiserver-pause-355053            kube-system
	c90e9778f2894       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   41 seconds ago      Running             kube-controller-manager   0                   5838826cd2b43       kube-controller-manager-pause-355053   kube-system
	f251d470cd673       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   41 seconds ago      Running             kube-scheduler            0                   a25829e60892e       kube-scheduler-pause-355053            kube-system
	e1f1cbcf13622       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   41 seconds ago      Running             etcd                      0                   37e5ad88a71da       etcd-pause-355053                      kube-system
	
	
	==> coredns [a6c753f00973061f2a04aaf7cc5307f88037c9bd9cea9c943593bf2deae5ed9d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52722 - 41859 "HINFO IN 7818207068985842571.8708193642429815693. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.099113008s
	
	
	==> describe nodes <==
	Name:               pause-355053
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-355053
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=pause-355053
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_56_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:56:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-355053
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:56:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:56:52 +0000   Fri, 05 Dec 2025 06:56:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:56:52 +0000   Fri, 05 Dec 2025 06:56:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:56:52 +0000   Fri, 05 Dec 2025 06:56:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:56:52 +0000   Fri, 05 Dec 2025 06:56:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-355053
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                142985db-c011-4522-89dc-7a9bfef099f7
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-m48hg                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-pause-355053                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-5nfzr                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-pause-355053             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-pause-355053    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-kqmhr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-pause-355053             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node pause-355053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node pause-355053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node pause-355053 status is now: NodeHasSufficientPID
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s                kubelet          Node pause-355053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s                kubelet          Node pause-355053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s                kubelet          Node pause-355053 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node pause-355053 event: Registered Node pause-355053 in Controller
	  Normal  NodeReady                20s                kubelet          Node pause-355053 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081455] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024960] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.135465] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 5 06:07] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.022771] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023869] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023920] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023880] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +2.047782] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +4.032580] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +8.063178] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[ +16.381345] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[Dec 5 06:08] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	
	
	==> etcd [e1f1cbcf13622926c9b9df1bdc1f1946f032c304a99f73db3aa2d792f98f94e7] <==
	{"level":"warn","ts":"2025-12-05T06:56:18.171948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.184963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.191080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.200097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.209823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.219610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.230494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.245097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.251844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.267803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.276128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.283017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.341916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53036","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T06:56:29.923546Z","caller":"traceutil/trace.go:172","msg":"trace[1308554962] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"139.477606ms","start":"2025-12-05T06:56:29.784047Z","end":"2025-12-05T06:56:29.923525Z","steps":["trace[1308554962] 'process raft request'  (duration: 137.384366ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:56:30.899762Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.798311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-05T06:56:30.899943Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.485031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-355053\" limit:1 ","response":"range_response_count:1 size:5986"}
	{"level":"info","ts":"2025-12-05T06:56:30.899982Z","caller":"traceutil/trace.go:172","msg":"trace[840456427] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-355053; range_end:; response_count:1; response_revision:416; }","duration":"202.517917ms","start":"2025-12-05T06:56:30.697452Z","end":"2025-12-05T06:56:30.899970Z","steps":["trace[840456427] 'range keys from in-memory index tree'  (duration: 202.295172ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:56:30.899988Z","caller":"traceutil/trace.go:172","msg":"trace[1153837512] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:416; }","duration":"202.045557ms","start":"2025-12-05T06:56:30.697906Z","end":"2025-12-05T06:56:30.899952Z","steps":["trace[1153837512] 'range keys from in-memory index tree'  (duration: 201.664505ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:56:30.899846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"227.553269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-355053\" limit:1 ","response":"range_response_count:1 size:5559"}
	{"level":"info","ts":"2025-12-05T06:56:30.900269Z","caller":"traceutil/trace.go:172","msg":"trace[863780581] range","detail":"{range_begin:/registry/minions/pause-355053; range_end:; response_count:1; response_revision:416; }","duration":"227.975306ms","start":"2025-12-05T06:56:30.672264Z","end":"2025-12-05T06:56:30.900240Z","steps":["trace[863780581] 'range keys from in-memory index tree'  (duration: 227.366048ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:56:31.050666Z","caller":"traceutil/trace.go:172","msg":"trace[1085688299] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"140.247011ms","start":"2025-12-05T06:56:30.910399Z","end":"2025-12-05T06:56:31.050646Z","steps":["trace[1085688299] 'process raft request'  (duration: 140.133758ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:56:31.330061Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.614268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-355053\" limit:1 ","response":"range_response_count:1 size:5559"}
	{"level":"info","ts":"2025-12-05T06:56:31.330140Z","caller":"traceutil/trace.go:172","msg":"trace[1413122416] range","detail":"{range_begin:/registry/minions/pause-355053; range_end:; response_count:1; response_revision:417; }","duration":"158.700914ms","start":"2025-12-05T06:56:31.171419Z","end":"2025-12-05T06:56:31.330120Z","steps":["trace[1413122416] 'range keys from in-memory index tree'  (duration: 158.44578ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:56:42.744959Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"343.333231ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-05T06:56:42.745033Z","caller":"traceutil/trace.go:172","msg":"trace[1748903115] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:439; }","duration":"343.42378ms","start":"2025-12-05T06:56:42.401594Z","end":"2025-12-05T06:56:42.745018Z","steps":["trace[1748903115] 'range keys from in-memory index tree'  (duration: 343.232918ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:56:57 up  1:39,  0 user,  load average: 3.54, 1.79, 1.22
	Linux pause-355053 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c008f56b26231ab9f01e1d4637584edad703e68d28681c0ac32f9b909630a52] <==
	I1205 06:56:27.298223       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 06:56:27.334564       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1205 06:56:27.334727       1 main.go:148] setting mtu 1500 for CNI 
	I1205 06:56:27.334752       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 06:56:27.334780       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T06:56:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 06:56:27.636372       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 06:56:27.636413       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 06:56:27.636427       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 06:56:27.636572       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 06:56:28.037408       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 06:56:28.037441       1 metrics.go:72] Registering metrics
	I1205 06:56:28.037494       1 controller.go:711] "Syncing nftables rules"
	I1205 06:56:37.600717       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 06:56:37.600784       1 main.go:301] handling current node
	I1205 06:56:47.599348       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 06:56:47.599404       1 main.go:301] handling current node
	I1205 06:56:57.600434       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 06:56:57.600484       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e01dd197330ac7e05c3d4538a85363df4fc90854af178e996b184e8e5380031e] <==
	I1205 06:56:18.959655       1 policy_source.go:240] refreshing policies
	E1205 06:56:18.992991       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1205 06:56:19.040226       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 06:56:19.043491       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 06:56:19.043639       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1205 06:56:19.061234       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 06:56:19.061917       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1205 06:56:19.162002       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 06:56:19.840237       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1205 06:56:19.845723       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1205 06:56:19.845744       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 06:56:20.416055       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 06:56:20.450434       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 06:56:20.541729       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1205 06:56:20.548395       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1205 06:56:20.549419       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 06:56:20.554078       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 06:56:20.886007       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 06:56:21.765296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 06:56:21.779355       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 06:56:21.789381       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1205 06:56:26.688796       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1205 06:56:26.843984       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 06:56:26.852376       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 06:56:26.911801       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c90e9778f2894bc931fddacdbc550c61db02951ba5cce6883f1354b609119233] <==
	I1205 06:56:25.884468       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1205 06:56:25.885036       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 06:56:25.885165       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1205 06:56:25.885426       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 06:56:25.885462       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1205 06:56:25.885552       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1205 06:56:25.885746       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 06:56:25.886003       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1205 06:56:25.885180       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 06:56:25.886026       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1205 06:56:25.886580       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1205 06:56:25.888500       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1205 06:56:25.889066       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1205 06:56:25.889560       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1205 06:56:25.889626       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1205 06:56:25.889859       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 06:56:25.890124       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1205 06:56:25.890137       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1205 06:56:25.891678       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:56:25.895483       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1205 06:56:25.898822       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-355053" podCIDRs=["10.244.0.0/24"]
	I1205 06:56:25.904088       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:56:25.904985       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1205 06:56:25.908881       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:56:40.835087       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ef0e38429e2a72947841ee46cd4e3c082cc8d45a2fd25fa53b8b873b3d945b73] <==
	I1205 06:56:27.156885       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:56:27.218272       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:56:27.318395       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:56:27.318449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1205 06:56:27.318588       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:56:27.341659       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:56:27.341823       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:56:27.349540       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:56:27.349955       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:56:27.350018       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:56:27.351524       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:56:27.351544       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:56:27.351632       1 config.go:200] "Starting service config controller"
	I1205 06:56:27.351644       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:56:27.351774       1 config.go:309] "Starting node config controller"
	I1205 06:56:27.351789       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:56:27.351797       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:56:27.351814       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:56:27.351828       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:56:27.451994       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:56:27.452023       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:56:27.452019       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f251d470cd6736b4a6277b54f6a84799f3538e97b81705f7032e91c72f54a815] <==
	E1205 06:56:18.947719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 06:56:18.947723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 06:56:18.947837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 06:56:18.947890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:56:18.947908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:56:18.947997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 06:56:18.947995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 06:56:18.948090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:56:18.948111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 06:56:18.950402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 06:56:18.950868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 06:56:18.951367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 06:56:18.951368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:56:18.951486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 06:56:19.807318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:56:19.828586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 06:56:19.850067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 06:56:19.866981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1205 06:56:19.911088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:56:19.934248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 06:56:20.000624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 06:56:20.046487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 06:56:20.088793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 06:56:20.126348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1205 06:56:22.931991       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 06:56:46 pause-355053 kubelet[1328]: E1205 06:56:46.722447    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:46 pause-355053 kubelet[1328]: E1205 06:56:46.722460    1328 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:48 pause-355053 kubelet[1328]: W1205 06:56:48.726536    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:48 pause-355053 kubelet[1328]: E1205 06:56:48.726634    1328 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 05 06:56:48 pause-355053 kubelet[1328]: E1205 06:56:48.726675    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:48 pause-355053 kubelet[1328]: E1205 06:56:48.726688    1328 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:48 pause-355053 kubelet[1328]: W1205 06:56:48.827783    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: W1205 06:56:49.003741    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: W1205 06:56:49.227447    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.661741    1328 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.661811    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.661828    1328 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.661839    1328 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:49 pause-355053 kubelet[1328]: W1205 06:56:49.706550    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.727858    1328 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.727913    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.727927    1328 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:50 pause-355053 kubelet[1328]: W1205 06:56:50.466976    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:50 pause-355053 kubelet[1328]: E1205 06:56:50.728674    1328 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 05 06:56:50 pause-355053 kubelet[1328]: E1205 06:56:50.728757    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:50 pause-355053 kubelet[1328]: E1205 06:56:50.728776    1328 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:54 pause-355053 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 06:56:54 pause-355053 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 06:56:54 pause-355053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:56:54 pause-355053 systemd[1]: kubelet.service: Consumed 1.317s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-355053 -n pause-355053
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-355053 -n pause-355053: exit status 2 (368.603778ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-355053 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-355053
helpers_test.go:243: (dbg) docker inspect pause-355053:

-- stdout --
	[
	    {
	        "Id": "2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5",
	        "Created": "2025-12-05T06:56:04.829859514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:56:04.887387976Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5/hosts",
	        "LogPath": "/var/lib/docker/containers/2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5/2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5-json.log",
	        "Name": "/pause-355053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-355053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-355053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2731ca73fa549561119c7c66425f6cf2fc5b4bbf1a843f8da51eda4340a6e1f5",
	                "LowerDir": "/var/lib/docker/overlay2/9a54a68140d04b4586aea14d91c445a9261b9965485d7c7451009f44c970156e-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a54a68140d04b4586aea14d91c445a9261b9965485d7c7451009f44c970156e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a54a68140d04b4586aea14d91c445a9261b9965485d7c7451009f44c970156e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a54a68140d04b4586aea14d91c445a9261b9965485d7c7451009f44c970156e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-355053",
	                "Source": "/var/lib/docker/volumes/pause-355053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-355053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-355053",
	                "name.minikube.sigs.k8s.io": "pause-355053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6726555be8e123bc087bf48f0b39dda0074ec0660e6601b6d2a55d8af8f0f245",
	            "SandboxKey": "/var/run/docker/netns/6726555be8e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-355053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0de155e2cf123b4ecfcc2a29a2512261f8846c4622fa408c7641fcd8ec8c9a1c",
	                    "EndpointID": "d46d86ef3ca29df93f4edf4a36c4ff7fd36f1ca2ebb5f24f4de6e308b578aa69",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ea:30:b2:a4:92:cd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-355053",
	                        "2731ca73fa54"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-355053 -n pause-355053
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-355053 -n pause-355053: exit status 2 (357.367521ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-355053 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-355053 logs -n 25: (1.035205148s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-644744 --schedule 5m -v=5 --alsologtostderr                                                         │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --cancel-scheduled                                                                           │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │ 05 Dec 25 06:54 UTC │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │                     │
	│ stop    │ -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:54 UTC │ 05 Dec 25 06:55 UTC │
	│ delete  │ -p scheduled-stop-644744                                                                                              │ scheduled-stop-644744       │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:55 UTC │
	│ start   │ -p insufficient-storage-238325 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-238325 │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │                     │
	│ delete  │ -p insufficient-storage-238325                                                                                        │ insufficient-storage-238325 │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:55 UTC │
	│ start   │ -p pause-355053 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-355053                │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p offline-crio-314280 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio     │ offline-crio-314280         │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p NoKubernetes-385989 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-385989         │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │                     │
	│ start   │ -p force-systemd-env-435873 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio            │ force-systemd-env-435873    │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p NoKubernetes-385989 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-385989         │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:56 UTC │
	│ delete  │ -p force-systemd-env-435873                                                                                           │ force-systemd-env-435873    │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p stopped-upgrade-515128 --memory=3072 --vm-driver=docker  --container-runtime=crio                                  │ stopped-upgrade-515128      │ jenkins │ v1.35.0 │ 05 Dec 25 06:56 UTC │                     │
	│ start   │ -p NoKubernetes-385989 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-385989         │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:56 UTC │
	│ delete  │ -p offline-crio-314280                                                                                                │ offline-crio-314280         │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p pause-355053 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-355053                │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:56 UTC │
	│ start   │ -p missing-upgrade-044081 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-044081      │ jenkins │ v1.35.0 │ 05 Dec 25 06:56 UTC │                     │
	│ delete  │ -p NoKubernetes-385989                                                                                                │ NoKubernetes-385989         │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:56 UTC │
	│ pause   │ -p pause-355053 --alsologtostderr -v=5                                                                                │ pause-355053                │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │                     │
	│ start   │ -p NoKubernetes-385989 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-385989         │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:56:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:56:55.156927  225434 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:56:55.157171  225434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:56:55.157180  225434 out.go:374] Setting ErrFile to fd 2...
	I1205 06:56:55.157185  225434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:56:55.157457  225434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:56:55.158011  225434 out.go:368] Setting JSON to false
	I1205 06:56:55.159136  225434 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5959,"bootTime":1764911856,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:56:55.159185  225434 start.go:143] virtualization: kvm guest
	I1205 06:56:55.161755  225434 out.go:179] * [NoKubernetes-385989] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:56:55.162913  225434 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:56:55.162964  225434 notify.go:221] Checking for updates...
	I1205 06:56:55.165140  225434 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:56:55.166591  225434 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:56:55.167676  225434 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:56:55.168618  225434 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:56:55.170278  225434 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:56:55.172108  225434 config.go:182] Loaded profile config "missing-upgrade-044081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1205 06:56:55.172316  225434 config.go:182] Loaded profile config "pause-355053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:56:55.172496  225434 config.go:182] Loaded profile config "stopped-upgrade-515128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1205 06:56:55.172534  225434 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 06:56:55.172678  225434 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:56:55.202233  225434 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:56:55.202403  225434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:56:55.270438  225434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-05 06:56:55.259673317 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:56:55.270592  225434 docker.go:319] overlay module found
	I1205 06:56:55.272127  225434 out.go:179] * Using the docker driver based on user configuration
	I1205 06:56:55.273192  225434 start.go:309] selected driver: docker
	I1205 06:56:55.273211  225434 start.go:927] validating driver "docker" against <nil>
	I1205 06:56:55.273225  225434 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:56:55.273976  225434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:56:55.339016  225434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-05 06:56:55.327804288 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:56:55.339146  225434 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 06:56:55.339220  225434 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:56:55.339445  225434 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:56:55.343441  225434 out.go:179] * Using Docker driver with root privileges
	I1205 06:56:55.344514  225434 cni.go:84] Creating CNI manager for ""
	I1205 06:56:55.344590  225434 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:56:55.344605  225434 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:56:55.344649  225434 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 06:56:55.344702  225434 start.go:353] cluster config:
	{Name:NoKubernetes-385989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-385989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:56:55.345911  225434 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-385989
	I1205 06:56:55.347520  225434 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:56:55.348812  225434 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:56:55.349888  225434 cache.go:59] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1205 06:56:55.350025  225434 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/NoKubernetes-385989/config.json ...
	I1205 06:56:55.350059  225434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/NoKubernetes-385989/config.json: {Name:mkd94856f1134c9f4d9fb1f628c7b01ee6b0df19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:56:55.350190  225434 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:56:55.380432  225434 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:56:55.380455  225434 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 06:56:55.380474  225434 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:56:55.380505  225434 start.go:360] acquireMachinesLock for NoKubernetes-385989: {Name:mk3f2d7fbe0b75327c4a414bb071365bd5df1b84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:56:55.380578  225434 start.go:364] duration metric: took 52.526µs to acquireMachinesLock for "NoKubernetes-385989"
	I1205 06:56:55.380601  225434 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-385989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-385989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:56:55.380692  225434 start.go:125] createHost starting for "" (driver="docker")
	I1205 06:56:55.075417  220008 cli_runner.go:164] Run: docker network inspect stopped-upgrade-515128 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:56:55.094353  220008 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 06:56:55.098908  220008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:56:55.111999  220008 kubeadm.go:883] updating cluster {Name:stopped-upgrade-515128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-515128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:56:55.112115  220008 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1205 06:56:55.112159  220008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:56:55.198124  220008 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:56:55.198139  220008 crio.go:433] Images already preloaded, skipping extraction
	I1205 06:56:55.198200  220008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:56:55.249207  220008 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:56:55.249224  220008 cache_images.go:84] Images are preloaded, skipping loading
	I1205 06:56:55.249233  220008 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.0 crio true true} ...
	I1205 06:56:55.249505  220008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-515128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-515128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:56:55.249614  220008 ssh_runner.go:195] Run: crio config
	I1205 06:56:55.307684  220008 cni.go:84] Creating CNI manager for ""
	I1205 06:56:55.307701  220008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:56:55.307714  220008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 06:56:55.307745  220008 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-515128 NodeName:stopped-upgrade-515128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:56:55.307907  220008 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-515128"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:56:55.307981  220008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1205 06:56:55.319455  220008 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 06:56:55.319536  220008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:56:55.331045  220008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1205 06:56:55.353139  220008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 06:56:55.379133  220008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1205 06:56:55.407442  220008 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:56:55.411439  220008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:56:55.425674  220008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:56:55.545518  220008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:56:55.572975  220008 certs.go:68] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128 for IP: 192.168.85.2
	I1205 06:56:55.572990  220008 certs.go:194] generating shared ca certs ...
	I1205 06:56:55.573012  220008 certs.go:226] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:56:55.573186  220008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 06:56:55.573248  220008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 06:56:55.573264  220008 certs.go:256] generating profile certs ...
	I1205 06:56:55.573369  220008 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.key
	I1205 06:56:55.573385  220008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.crt with IP's: []
	I1205 06:56:55.896942  220008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.crt ...
	I1205 06:56:55.896960  220008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.crt: {Name:mkecf43351d85761272b799d82b2c8e7e837e8d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:56:55.897148  220008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.key ...
	I1205 06:56:55.897165  220008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.key: {Name:mk03a0b9b7ead1ddd3da3d666a90c53b4920a835 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:56:55.897274  220008 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.key.f9d5e9a9
	I1205 06:56:55.897293  220008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.crt.f9d5e9a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 06:56:56.259862  220008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.crt.f9d5e9a9 ...
	I1205 06:56:56.259879  220008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.crt.f9d5e9a9: {Name:mkbd39c379075cdb1be1833ce1b00f5b260253f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:56:56.260030  220008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.key.f9d5e9a9 ...
	I1205 06:56:56.260043  220008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.key.f9d5e9a9: {Name:mkfdaac2537d9055b1fce44ded0f946cd49c4a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:56:56.260115  220008 certs.go:381] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.crt.f9d5e9a9 -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.crt
	I1205 06:56:56.260184  220008 certs.go:385] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.key.f9d5e9a9 -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.key
	I1205 06:56:56.260231  220008 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/proxy-client.key
	I1205 06:56:56.260241  220008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/proxy-client.crt with IP's: []
	I1205 06:56:56.544211  220008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/proxy-client.crt ...
	I1205 06:56:56.544230  220008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/proxy-client.crt: {Name:mkaf1557e29901e224764f0e410d30b961df1510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:56:56.544434  220008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/proxy-client.key ...
	I1205 06:56:56.544455  220008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/proxy-client.key: {Name:mk7ec11e8f37921c08c217572de477ed800b2666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:56:56.544668  220008 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 06:56:56.544704  220008 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 06:56:56.544710  220008 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:56:56.544729  220008 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:56:56.544746  220008 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:56:56.544763  220008 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 06:56:56.544809  220008 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 06:56:56.545476  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:56:56.582871  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:56:56.620556  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:56:56.655573  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:56:56.693700  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 06:56:56.725950  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:56:56.759672  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:56:56.800676  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:56:56.834806  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:56:56.864956  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 06:56:56.896763  220008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 06:56:56.926410  220008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:56:56.946163  220008 ssh_runner.go:195] Run: openssl version
	I1205 06:56:56.952496  220008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I1205 06:56:56.963420  220008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 06:56:56.967556  220008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 06:56:56.967599  220008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 06:56:56.976066  220008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 06:56:56.986759  220008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 06:56:56.999111  220008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:56:57.004411  220008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:56:57.004476  220008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:56:57.014452  220008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 06:56:57.026836  220008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I1205 06:56:57.039531  220008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 06:56:57.043764  220008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 06:56:57.043814  220008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 06:56:57.052477  220008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
	I1205 06:56:57.065536  220008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:56:57.070020  220008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 06:56:57.070080  220008 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-515128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-515128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:56:57.070193  220008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:56:57.070249  220008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:56:57.117397  220008 cri.go:89] found id: ""
	I1205 06:56:57.117464  220008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:56:57.127142  220008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:56:57.136591  220008 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:56:57.136633  220008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:56:57.146956  220008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:56:57.146966  220008 kubeadm.go:157] found existing configuration files:
	
	I1205 06:56:57.147015  220008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 06:56:57.155801  220008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:56:57.155856  220008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:56:57.165640  220008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 06:56:57.175251  220008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:56:57.175290  220008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:56:57.184708  220008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 06:56:57.194226  220008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:56:57.194264  220008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:56:57.203966  220008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 06:56:57.213118  220008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:56:57.213179  220008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:56:57.222448  220008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:56:57.270433  220008 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1205 06:56:57.270500  220008 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 06:56:57.291187  220008 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:56:57.291240  220008 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1044-gcp
	I1205 06:56:57.291268  220008 kubeadm.go:310] OS: Linux
	I1205 06:56:57.291304  220008 kubeadm.go:310] CGROUPS_CPU: enabled
	I1205 06:56:57.291389  220008 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1205 06:56:57.291430  220008 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1205 06:56:57.291488  220008 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1205 06:56:57.291548  220008 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1205 06:56:57.291634  220008 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1205 06:56:57.291702  220008 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1205 06:56:57.291761  220008 kubeadm.go:310] CGROUPS_IO: enabled
	I1205 06:56:57.351729  220008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:56:57.351880  220008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:56:57.351992  220008 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:56:57.358664  220008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:56:54.577831  222149 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-044081:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (3.430170221s)
	I1205 06:56:54.577857  222149 kic.go:203] duration metric: took 3.430312262s to extract preloaded images to volume ...
	W1205 06:56:54.577941  222149 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1205 06:56:54.577967  222149 oci.go:249] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1205 06:56:54.578011  222149 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 06:56:54.641093  222149 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-044081 --name missing-upgrade-044081 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-044081 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-044081 --network missing-upgrade-044081 --ip 192.168.76.2 --volume missing-upgrade-044081:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1205 06:56:55.198298  222149 cli_runner.go:164] Run: docker container inspect missing-upgrade-044081 --format={{.State.Running}}
	I1205 06:56:55.220639  222149 cli_runner.go:164] Run: docker container inspect missing-upgrade-044081 --format={{.State.Status}}
	I1205 06:56:55.246367  222149 cli_runner.go:164] Run: docker exec missing-upgrade-044081 stat /var/lib/dpkg/alternatives/iptables
	I1205 06:56:55.295983  222149 oci.go:144] the created container "missing-upgrade-044081" has a running status.
	I1205 06:56:55.296023  222149 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/missing-upgrade-044081/id_rsa...
	I1205 06:56:55.708752  222149 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-12758/.minikube/machines/missing-upgrade-044081/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 06:56:55.736994  222149 cli_runner.go:164] Run: docker container inspect missing-upgrade-044081 --format={{.State.Status}}
	I1205 06:56:55.759544  222149 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 06:56:55.759572  222149 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-044081 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 06:56:55.808739  222149 cli_runner.go:164] Run: docker container inspect missing-upgrade-044081 --format={{.State.Status}}
	I1205 06:56:55.827981  222149 machine.go:93] provisionDockerMachine start ...
	I1205 06:56:55.828071  222149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-044081
	I1205 06:56:55.848248  222149 main.go:141] libmachine: Using SSH client type: native
	I1205 06:56:55.848563  222149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1205 06:56:55.848574  222149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 06:56:55.983982  222149 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-044081
	
	I1205 06:56:55.984008  222149 ubuntu.go:169] provisioning hostname "missing-upgrade-044081"
	I1205 06:56:55.984074  222149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-044081
	I1205 06:56:56.006001  222149 main.go:141] libmachine: Using SSH client type: native
	I1205 06:56:56.006168  222149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1205 06:56:56.006175  222149 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-044081 && echo "missing-upgrade-044081" | sudo tee /etc/hostname
	I1205 06:56:56.151612  222149 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-044081
	
	I1205 06:56:56.151682  222149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-044081
	I1205 06:56:56.171318  222149 main.go:141] libmachine: Using SSH client type: native
	I1205 06:56:56.171569  222149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1205 06:56:56.171595  222149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-044081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-044081/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-044081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:56:56.306766  222149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:56:56.306787  222149 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 06:56:56.306826  222149 ubuntu.go:177] setting up certificates
	I1205 06:56:56.306835  222149 provision.go:84] configureAuth start
	I1205 06:56:56.306889  222149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-044081
	I1205 06:56:56.330155  222149 provision.go:143] copyHostCerts
	I1205 06:56:56.330218  222149 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 06:56:56.330227  222149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 06:56:56.330301  222149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 06:56:56.330435  222149 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 06:56:56.330441  222149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 06:56:56.330482  222149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 06:56:56.330550  222149 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 06:56:56.330555  222149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 06:56:56.330593  222149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 06:56:56.330655  222149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-044081 san=[127.0.0.1 192.168.76.2 localhost minikube missing-upgrade-044081]
	I1205 06:56:56.576259  222149 provision.go:177] copyRemoteCerts
	I1205 06:56:56.576331  222149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:56:56.576380  222149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-044081
	I1205 06:56:56.608493  222149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/missing-upgrade-044081/id_rsa Username:docker}
	I1205 06:56:56.711067  222149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:56:56.743222  222149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 06:56:56.778069  222149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:56:56.816813  222149 provision.go:87] duration metric: took 509.953498ms to configureAuth
	I1205 06:56:56.816838  222149 ubuntu.go:193] setting minikube options for container-runtime
	I1205 06:56:56.817461  222149 config.go:182] Loaded profile config "missing-upgrade-044081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1205 06:56:56.817650  222149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-044081
	I1205 06:56:56.838938  222149 main.go:141] libmachine: Using SSH client type: native
	I1205 06:56:56.839184  222149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1205 06:56:56.839202  222149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:56:57.094274  222149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:56:57.094296  222149 machine.go:96] duration metric: took 1.266301109s to provisionDockerMachine
	I1205 06:56:57.094307  222149 client.go:171] duration metric: took 9.181542338s to LocalClient.Create
	I1205 06:56:57.094376  222149 start.go:167] duration metric: took 9.181615096s to libmachine.API.Create "missing-upgrade-044081"
	I1205 06:56:57.094387  222149 start.go:293] postStartSetup for "missing-upgrade-044081" (driver="docker")
	I1205 06:56:57.094397  222149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:56:57.094465  222149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:56:57.094514  222149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-044081
	I1205 06:56:57.115605  222149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/missing-upgrade-044081/id_rsa Username:docker}
	I1205 06:56:57.211878  222149 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:56:57.215568  222149 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:56:57.215605  222149 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 06:56:57.215612  222149 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 06:56:57.215618  222149 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1205 06:56:57.215627  222149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 06:56:57.215668  222149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 06:56:57.215741  222149 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 06:56:57.215826  222149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 06:56:57.225632  222149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 06:56:57.259470  222149 start.go:296] duration metric: took 165.066742ms for postStartSetup
	I1205 06:56:57.259872  222149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-044081
	I1205 06:56:57.281059  222149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/missing-upgrade-044081/config.json ...
	I1205 06:56:57.281368  222149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:56:57.281419  222149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-044081
	I1205 06:56:57.301895  222149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/missing-upgrade-044081/id_rsa Username:docker}
	I1205 06:56:57.394795  222149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:56:57.400032  222149 start.go:128] duration metric: took 9.489743276s to createHost
	I1205 06:56:57.400047  222149 start.go:83] releasing machines lock for "missing-upgrade-044081", held for 9.489856772s
	I1205 06:56:57.400114  222149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-044081
	I1205 06:56:57.419373  222149 ssh_runner.go:195] Run: cat /version.json
	I1205 06:56:57.419394  222149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:56:57.419420  222149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-044081
	I1205 06:56:57.419451  222149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-044081
	I1205 06:56:57.441617  222149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/missing-upgrade-044081/id_rsa Username:docker}
	I1205 06:56:57.442413  222149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/missing-upgrade-044081/id_rsa Username:docker}
	I1205 06:56:57.533942  222149 ssh_runner.go:195] Run: systemctl --version
	I1205 06:56:57.617115  222149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:56:57.361831  220008 out.go:235]   - Generating certificates and keys ...
	I1205 06:56:57.361921  220008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 06:56:57.362022  220008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	
	
	==> CRI-O <==
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.745423553Z" level=info msg="RDT not available in the host system"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.745432706Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.746189603Z" level=info msg="Conmon does support the --sync option"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.746212126Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.746227731Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.747027598Z" level=info msg="Conmon does support the --sync option"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.747044844Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.755685447Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.755741488Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.756468855Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.757073252Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.757190906Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.842108137Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-m48hg Namespace:kube-system ID:f129ef27e93f1595ba2fbd4c99a3a73d14a0d49ec1a1d490741fc93c8959a1d0 UID:08bbbed9-1fb4-4963-8c64-32ddd6f85a1e NetNS:/var/run/netns/c04145e5-63c8-4255-b5da-a698fdcbc0f4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000524368}] Aliases:map[]}"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.842376501Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-m48hg for CNI network kindnet (type=ptp)"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.842921804Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.842944701Z" level=info msg="Starting seccomp notifier watcher"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.842995343Z" level=info msg="Create NRI interface"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843105804Z" level=info msg="built-in NRI default validator is disabled"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843115543Z" level=info msg="runtime interface created"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843127394Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843135284Z" level=info msg="runtime interface starting up..."
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843143691Z" level=info msg="starting plugins..."
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.843157641Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:56:50 pause-355053 crio[2177]: time="2025-12-05T06:56:50.84353969Z" level=info msg="No systemd watchdog enabled"
	Dec 05 06:56:50 pause-355053 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a6c753f009730       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   20 seconds ago      Running             coredns                   0                   f129ef27e93f1       coredns-66bc5c9577-m48hg               kube-system
	ef0e38429e2a7       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   32 seconds ago      Running             kube-proxy                0                   5d453405fdb1c       kube-proxy-kqmhr                       kube-system
	3c008f56b2623       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   32 seconds ago      Running             kindnet-cni               0                   a5ec9875a958a       kindnet-5nfzr                          kube-system
	e01dd197330ac       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   43 seconds ago      Running             kube-apiserver            0                   af2ec9a58d330       kube-apiserver-pause-355053            kube-system
	c90e9778f2894       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   43 seconds ago      Running             kube-controller-manager   0                   5838826cd2b43       kube-controller-manager-pause-355053   kube-system
	f251d470cd673       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   43 seconds ago      Running             kube-scheduler            0                   a25829e60892e       kube-scheduler-pause-355053            kube-system
	e1f1cbcf13622       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   43 seconds ago      Running             etcd                      0                   37e5ad88a71da       etcd-pause-355053                      kube-system
	
	
	==> coredns [a6c753f00973061f2a04aaf7cc5307f88037c9bd9cea9c943593bf2deae5ed9d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52722 - 41859 "HINFO IN 7818207068985842571.8708193642429815693. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.099113008s
	
	
	==> describe nodes <==
	Name:               pause-355053
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-355053
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=pause-355053
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_56_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:56:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-355053
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:56:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:56:52 +0000   Fri, 05 Dec 2025 06:56:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:56:52 +0000   Fri, 05 Dec 2025 06:56:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:56:52 +0000   Fri, 05 Dec 2025 06:56:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:56:52 +0000   Fri, 05 Dec 2025 06:56:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-355053
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                142985db-c011-4522-89dc-7a9bfef099f7
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-m48hg                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     32s
	  kube-system                 etcd-pause-355053                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-5nfzr                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-pause-355053             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-pause-355053    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-kqmhr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-pause-355053             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 32s                kube-proxy       
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node pause-355053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node pause-355053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node pause-355053 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet          Node pause-355053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet          Node pause-355053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet          Node pause-355053 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34s                node-controller  Node pause-355053 event: Registered Node pause-355053 in Controller
	  Normal  NodeReady                22s                kubelet          Node pause-355053 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081455] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024960] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.135465] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 5 06:07] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.022771] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023869] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023920] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +1.023880] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +2.047782] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +4.032580] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[  +8.063178] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[ +16.381345] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	[Dec 5 06:08] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 2d 9a 9a b0 3d 26 47 ae f1 8b 98 08 00
	
	
	==> etcd [e1f1cbcf13622926c9b9df1bdc1f1946f032c304a99f73db3aa2d792f98f94e7] <==
	{"level":"warn","ts":"2025-12-05T06:56:18.171948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.184963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.191080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.200097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.209823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.219610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.230494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.245097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.251844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.267803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.276128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.283017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:56:18.341916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53036","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T06:56:29.923546Z","caller":"traceutil/trace.go:172","msg":"trace[1308554962] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"139.477606ms","start":"2025-12-05T06:56:29.784047Z","end":"2025-12-05T06:56:29.923525Z","steps":["trace[1308554962] 'process raft request'  (duration: 137.384366ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:56:30.899762Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.798311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-05T06:56:30.899943Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.485031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-355053\" limit:1 ","response":"range_response_count:1 size:5986"}
	{"level":"info","ts":"2025-12-05T06:56:30.899982Z","caller":"traceutil/trace.go:172","msg":"trace[840456427] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-355053; range_end:; response_count:1; response_revision:416; }","duration":"202.517917ms","start":"2025-12-05T06:56:30.697452Z","end":"2025-12-05T06:56:30.899970Z","steps":["trace[840456427] 'range keys from in-memory index tree'  (duration: 202.295172ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:56:30.899988Z","caller":"traceutil/trace.go:172","msg":"trace[1153837512] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:416; }","duration":"202.045557ms","start":"2025-12-05T06:56:30.697906Z","end":"2025-12-05T06:56:30.899952Z","steps":["trace[1153837512] 'range keys from in-memory index tree'  (duration: 201.664505ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:56:30.899846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"227.553269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-355053\" limit:1 ","response":"range_response_count:1 size:5559"}
	{"level":"info","ts":"2025-12-05T06:56:30.900269Z","caller":"traceutil/trace.go:172","msg":"trace[863780581] range","detail":"{range_begin:/registry/minions/pause-355053; range_end:; response_count:1; response_revision:416; }","duration":"227.975306ms","start":"2025-12-05T06:56:30.672264Z","end":"2025-12-05T06:56:30.900240Z","steps":["trace[863780581] 'range keys from in-memory index tree'  (duration: 227.366048ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:56:31.050666Z","caller":"traceutil/trace.go:172","msg":"trace[1085688299] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"140.247011ms","start":"2025-12-05T06:56:30.910399Z","end":"2025-12-05T06:56:31.050646Z","steps":["trace[1085688299] 'process raft request'  (duration: 140.133758ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:56:31.330061Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.614268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-355053\" limit:1 ","response":"range_response_count:1 size:5559"}
	{"level":"info","ts":"2025-12-05T06:56:31.330140Z","caller":"traceutil/trace.go:172","msg":"trace[1413122416] range","detail":"{range_begin:/registry/minions/pause-355053; range_end:; response_count:1; response_revision:417; }","duration":"158.700914ms","start":"2025-12-05T06:56:31.171419Z","end":"2025-12-05T06:56:31.330120Z","steps":["trace[1413122416] 'range keys from in-memory index tree'  (duration: 158.44578ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:56:42.744959Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"343.333231ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-05T06:56:42.745033Z","caller":"traceutil/trace.go:172","msg":"trace[1748903115] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:439; }","duration":"343.42378ms","start":"2025-12-05T06:56:42.401594Z","end":"2025-12-05T06:56:42.745018Z","steps":["trace[1748903115] 'range keys from in-memory index tree'  (duration: 343.232918ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:56:59 up  1:39,  0 user,  load average: 3.42, 1.79, 1.22
	Linux pause-355053 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c008f56b26231ab9f01e1d4637584edad703e68d28681c0ac32f9b909630a52] <==
	I1205 06:56:27.298223       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 06:56:27.334564       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1205 06:56:27.334727       1 main.go:148] setting mtu 1500 for CNI 
	I1205 06:56:27.334752       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 06:56:27.334780       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T06:56:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 06:56:27.636372       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 06:56:27.636413       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 06:56:27.636427       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 06:56:27.636572       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 06:56:28.037408       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 06:56:28.037441       1 metrics.go:72] Registering metrics
	I1205 06:56:28.037494       1 controller.go:711] "Syncing nftables rules"
	I1205 06:56:37.600717       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 06:56:37.600784       1 main.go:301] handling current node
	I1205 06:56:47.599348       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 06:56:47.599404       1 main.go:301] handling current node
	I1205 06:56:57.600434       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 06:56:57.600484       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e01dd197330ac7e05c3d4538a85363df4fc90854af178e996b184e8e5380031e] <==
	I1205 06:56:18.959655       1 policy_source.go:240] refreshing policies
	E1205 06:56:18.992991       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1205 06:56:19.040226       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 06:56:19.043491       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 06:56:19.043639       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1205 06:56:19.061234       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 06:56:19.061917       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1205 06:56:19.162002       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 06:56:19.840237       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1205 06:56:19.845723       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1205 06:56:19.845744       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 06:56:20.416055       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 06:56:20.450434       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 06:56:20.541729       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1205 06:56:20.548395       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1205 06:56:20.549419       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 06:56:20.554078       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 06:56:20.886007       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 06:56:21.765296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 06:56:21.779355       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 06:56:21.789381       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1205 06:56:26.688796       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1205 06:56:26.843984       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 06:56:26.852376       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 06:56:26.911801       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c90e9778f2894bc931fddacdbc550c61db02951ba5cce6883f1354b609119233] <==
	I1205 06:56:25.884468       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1205 06:56:25.885036       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 06:56:25.885165       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1205 06:56:25.885426       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 06:56:25.885462       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1205 06:56:25.885552       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1205 06:56:25.885746       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 06:56:25.886003       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1205 06:56:25.885180       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 06:56:25.886026       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1205 06:56:25.886580       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1205 06:56:25.888500       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1205 06:56:25.889066       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1205 06:56:25.889560       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1205 06:56:25.889626       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1205 06:56:25.889859       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 06:56:25.890124       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1205 06:56:25.890137       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1205 06:56:25.891678       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:56:25.895483       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1205 06:56:25.898822       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-355053" podCIDRs=["10.244.0.0/24"]
	I1205 06:56:25.904088       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:56:25.904985       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1205 06:56:25.908881       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:56:40.835087       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ef0e38429e2a72947841ee46cd4e3c082cc8d45a2fd25fa53b8b873b3d945b73] <==
	I1205 06:56:27.156885       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:56:27.218272       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:56:27.318395       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:56:27.318449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1205 06:56:27.318588       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:56:27.341659       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:56:27.341823       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:56:27.349540       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:56:27.349955       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:56:27.350018       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:56:27.351524       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:56:27.351544       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:56:27.351632       1 config.go:200] "Starting service config controller"
	I1205 06:56:27.351644       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:56:27.351774       1 config.go:309] "Starting node config controller"
	I1205 06:56:27.351789       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:56:27.351797       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:56:27.351814       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:56:27.351828       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:56:27.451994       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:56:27.452023       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:56:27.452019       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f251d470cd6736b4a6277b54f6a84799f3538e97b81705f7032e91c72f54a815] <==
	E1205 06:56:18.947719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 06:56:18.947723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 06:56:18.947837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 06:56:18.947890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:56:18.947908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:56:18.947997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 06:56:18.947995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 06:56:18.948090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:56:18.948111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 06:56:18.950402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 06:56:18.950868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 06:56:18.951367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 06:56:18.951368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:56:18.951486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 06:56:19.807318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:56:19.828586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 06:56:19.850067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 06:56:19.866981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1205 06:56:19.911088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:56:19.934248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 06:56:20.000624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 06:56:20.046487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 06:56:20.088793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 06:56:20.126348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1205 06:56:22.931991       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 06:56:46 pause-355053 kubelet[1328]: E1205 06:56:46.722447    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:46 pause-355053 kubelet[1328]: E1205 06:56:46.722460    1328 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:48 pause-355053 kubelet[1328]: W1205 06:56:48.726536    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:48 pause-355053 kubelet[1328]: E1205 06:56:48.726634    1328 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 05 06:56:48 pause-355053 kubelet[1328]: E1205 06:56:48.726675    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:48 pause-355053 kubelet[1328]: E1205 06:56:48.726688    1328 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:48 pause-355053 kubelet[1328]: W1205 06:56:48.827783    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: W1205 06:56:49.003741    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: W1205 06:56:49.227447    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.661741    1328 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.661811    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.661828    1328 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.661839    1328 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:49 pause-355053 kubelet[1328]: W1205 06:56:49.706550    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.727858    1328 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.727913    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:49 pause-355053 kubelet[1328]: E1205 06:56:49.727927    1328 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:50 pause-355053 kubelet[1328]: W1205 06:56:50.466976    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 05 06:56:50 pause-355053 kubelet[1328]: E1205 06:56:50.728674    1328 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 05 06:56:50 pause-355053 kubelet[1328]: E1205 06:56:50.728757    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:50 pause-355053 kubelet[1328]: E1205 06:56:50.728776    1328 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 05 06:56:54 pause-355053 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 06:56:54 pause-355053 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 06:56:54 pause-355053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:56:54 pause-355053 systemd[1]: kubelet.service: Consumed 1.317s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-355053 -n pause-355053
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-355053 -n pause-355053: exit status 2 (357.034712ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-355053 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.69s)
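Triage note (not part of the captured test output): the kubelet log above shows repeated "dial unix /var/run/crio/crio.sock: connect: no such file or directory" errors while CRI-O was restarting, which matches the pause command timing out. A minimal sketch for checking the runtime state on the node by hand, assuming the pause-355053 profile is still running and that systemctl/crictl are available in the kicbase image:

	minikube ssh -p pause-355053 -- "sudo systemctl status crio --no-pager"
	minikube ssh -p pause-355053 -- "sudo crictl ps -a"

If CRI-O reports active and crictl lists the control-plane containers, the socket outage was transient and the failure is a race between the restart and the pause check rather than a broken runtime.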

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (275.393685ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:05:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-874709 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-874709 describe deploy/metrics-server -n kube-system: exit status 1 (74.107608ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-874709 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
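Triage note (not part of the captured test output): the stderr above shows the addon enable aborted in minikube's paused-state probe, which shells out to "sudo runc list -f json" and fails because /run/runc does not exist on the node. A minimal sketch for re-running that probe manually, assuming the old-k8s-version-874709 profile is still up; the ls/crictl lines are only illustrative checks, not something the test suite runs:

	minikube ssh -p old-k8s-version-874709 -- "sudo runc list -f json"
	minikube ssh -p old-k8s-version-874709 -- "ls -ld /run/runc; sudo crictl ps"

If runc reproduces the "open /run/runc: no such file or directory" error while crictl still lists running containers, the pods are healthy and the failure is confined to the runc state-directory lookup that the paused check depends on.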
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-874709
helpers_test.go:243: (dbg) docker inspect old-k8s-version-874709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5",
	        "Created": "2025-12-05T07:04:05.274488478Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:04:05.328984878Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/hosts",
	        "LogPath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5-json.log",
	        "Name": "/old-k8s-version-874709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-874709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-874709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5",
	                "LowerDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-874709",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-874709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-874709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-874709",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-874709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5ff13c35b3c95e279e7353c639925e304108ee0d5a23c5ef6cda78e7ddafe3a9",
	            "SandboxKey": "/var/run/docker/netns/5ff13c35b3c9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-874709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b675820a4e14e6d815ef976a01c5649e140b5ac4be761da7497f0b550155e220",
	                    "EndpointID": "02fd0c49f33c07cdc7af2fc26194a1f34f1f67a01760a3fe03e1d379816815e5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "a6:38:81:ad:30:04",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-874709",
	                        "e58ec92f2b17"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
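The inspect dump above reports the container's port mappings and network address; the same fields can be read back directly with docker's Go-template formatter. The two queries below are a minimal sketch of that: they mirror the port and address templates that appear later in this log (the cli_runner lines for embed-certs-770390), trimmed to the IPv4 fields and pointed at old-k8s-version-874709. The expected values are the ones visible in the JSON above.

  # host port published for the container's SSH endpoint (22/tcp)
  docker container inspect \
    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
    old-k8s-version-874709
  # -> 33093

  # container IPv4 address on its user-defined network
  docker container inspect \
    -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
    old-k8s-version-874709
  # -> 192.168.103.2

Both values are specific to this run; the host ports are assigned when the container is created, so they differ between profiles and between runs.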
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874709 -n old-k8s-version-874709
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-874709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-874709 logs -n 25: (1.069815616s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-397607 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo docker system info                                                                                                                                 │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cri-dockerd --version                                                                                                                              │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo containerd config dump                                                                                                                             │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo crio config                                                                                                                                        │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p bridge-397607                                                                                                                                                         │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p disable-driver-mounts-245906                                                                                                                                          │ disable-driver-mounts-245906 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:04:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:04:53.601860  355650 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:04:53.602126  355650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:04:53.602137  355650 out.go:374] Setting ErrFile to fd 2...
	I1205 07:04:53.602143  355650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:04:53.602386  355650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:04:53.602862  355650 out.go:368] Setting JSON to false
	I1205 07:04:53.603991  355650 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6438,"bootTime":1764911856,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:04:53.604048  355650 start.go:143] virtualization: kvm guest
	I1205 07:04:53.610493  355650 out.go:179] * [default-k8s-diff-port-172186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:04:53.611805  355650 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:04:53.611813  355650 notify.go:221] Checking for updates...
	I1205 07:04:53.614004  355650 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:04:53.615263  355650 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:04:53.616369  355650 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:04:53.617458  355650 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:04:53.618544  355650 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:04:53.619983  355650 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:04:53.620094  355650 config.go:182] Loaded profile config "no-preload-008839": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:04:53.620188  355650 config.go:182] Loaded profile config "old-k8s-version-874709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1205 07:04:53.620298  355650 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:04:53.643692  355650 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:04:53.643847  355650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:04:53.706470  355650 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-05 07:04:53.696399615 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:04:53.706610  355650 docker.go:319] overlay module found
	I1205 07:04:53.707874  355650 out.go:179] * Using the docker driver based on user configuration
	I1205 07:04:53.709064  355650 start.go:309] selected driver: docker
	I1205 07:04:53.709078  355650 start.go:927] validating driver "docker" against <nil>
	I1205 07:04:53.709089  355650 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:04:53.709720  355650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:04:53.769685  355650 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-05 07:04:53.759734446 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:04:53.769838  355650 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:04:53.770022  355650 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:04:53.771918  355650 out.go:179] * Using Docker driver with root privileges
	I1205 07:04:53.773048  355650 cni.go:84] Creating CNI manager for ""
	I1205 07:04:53.773123  355650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:04:53.773138  355650 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:04:53.773201  355650 start.go:353] cluster config:
	{Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:04:53.774388  355650 out.go:179] * Starting "default-k8s-diff-port-172186" primary control-plane node in "default-k8s-diff-port-172186" cluster
	I1205 07:04:53.775584  355650 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:04:53.776735  355650 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:04:53.777706  355650 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:04:53.777738  355650 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 07:04:53.777749  355650 cache.go:65] Caching tarball of preloaded images
	I1205 07:04:53.777805  355650 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:04:53.777837  355650 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 07:04:53.777853  355650 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:04:53.777973  355650 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/config.json ...
	I1205 07:04:53.778016  355650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/config.json: {Name:mkb362b8ea8e931f24cffa2e7edb118a0c734c27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:04:53.798568  355650 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:04:53.798584  355650 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:04:53.798598  355650 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:04:53.798626  355650 start.go:360] acquireMachinesLock for default-k8s-diff-port-172186: {Name:mkc7b70f4fd2c66eec9f181ab0dc691b16be91dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:04:53.798706  355650 start.go:364] duration metric: took 65.534µs to acquireMachinesLock for "default-k8s-diff-port-172186"
	I1205 07:04:53.798731  355650 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:04:53.798796  355650 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:04:51.931891  350525 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-770390
	
	I1205 07:04:51.931920  350525 ubuntu.go:182] provisioning hostname "embed-certs-770390"
	I1205 07:04:51.931992  350525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:04:51.951660  350525 main.go:143] libmachine: Using SSH client type: native
	I1205 07:04:51.951972  350525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:04:51.951994  350525 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-770390 && echo "embed-certs-770390" | sudo tee /etc/hostname
	I1205 07:04:52.180972  350525 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-770390
	
	I1205 07:04:52.181063  350525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:04:52.202546  350525 main.go:143] libmachine: Using SSH client type: native
	I1205 07:04:52.202805  350525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:04:52.202829  350525 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-770390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-770390/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-770390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:04:52.341434  350525 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:04:52.341470  350525 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:04:52.341491  350525 ubuntu.go:190] setting up certificates
	I1205 07:04:52.341504  350525 provision.go:84] configureAuth start
	I1205 07:04:52.341577  350525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:04:52.362182  350525 provision.go:143] copyHostCerts
	I1205 07:04:52.362243  350525 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:04:52.362258  350525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:04:52.362345  350525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:04:52.362469  350525 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:04:52.362481  350525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:04:52.362524  350525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:04:52.362619  350525 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:04:52.362630  350525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:04:52.362668  350525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:04:52.362755  350525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.embed-certs-770390 san=[127.0.0.1 192.168.76.2 embed-certs-770390 localhost minikube]
	I1205 07:04:52.473559  350525 provision.go:177] copyRemoteCerts
	I1205 07:04:52.473617  350525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:04:52.473665  350525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:04:52.493167  350525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:04:52.590664  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:04:52.701564  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 07:04:52.720769  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:04:52.739196  350525 provision.go:87] duration metric: took 397.66546ms to configureAuth
	I1205 07:04:52.739230  350525 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:04:52.739436  350525 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:04:52.739567  350525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:04:52.759547  350525 main.go:143] libmachine: Using SSH client type: native
	I1205 07:04:52.759778  350525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:04:52.759801  350525 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:04:53.124351  350525 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:04:53.124374  350525 machine.go:97] duration metric: took 4.369113377s to provisionDockerMachine
	I1205 07:04:53.124385  350525 client.go:176] duration metric: took 11.051229207s to LocalClient.Create
	I1205 07:04:53.124403  350525 start.go:167] duration metric: took 11.051297682s to libmachine.API.Create "embed-certs-770390"
	I1205 07:04:53.124411  350525 start.go:293] postStartSetup for "embed-certs-770390" (driver="docker")
	I1205 07:04:53.124423  350525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:04:53.124486  350525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:04:53.124532  350525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:04:53.145434  350525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:04:53.253211  350525 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:04:53.256912  350525 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:04:53.256942  350525 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:04:53.256960  350525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:04:53.257019  350525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:04:53.257103  350525 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:04:53.257210  350525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:04:53.265505  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:04:53.289273  350525 start.go:296] duration metric: took 164.848008ms for postStartSetup
	I1205 07:04:53.289690  350525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:04:53.316486  350525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/config.json ...
	I1205 07:04:53.316767  350525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:04:53.316817  350525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:04:53.338685  350525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:04:53.438907  350525 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:04:53.443643  350525 start.go:128] duration metric: took 11.373635828s to createHost
	I1205 07:04:53.443666  350525 start.go:83] releasing machines lock for "embed-certs-770390", held for 11.373773694s
	I1205 07:04:53.443741  350525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:04:53.461977  350525 ssh_runner.go:195] Run: cat /version.json
	I1205 07:04:53.462026  350525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:04:53.462063  350525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:04:53.462140  350525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:04:53.481454  350525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:04:53.482568  350525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:04:53.584663  350525 ssh_runner.go:195] Run: systemctl --version
	I1205 07:04:53.645693  350525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:04:53.687194  350525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:04:53.692391  350525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:04:53.692461  350525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:04:53.720190  350525 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 07:04:53.720213  350525 start.go:496] detecting cgroup driver to use...
	I1205 07:04:53.720242  350525 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:04:53.720291  350525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:04:53.740062  350525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:04:53.754820  350525 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:04:53.754885  350525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:04:53.772685  350525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:04:53.790171  350525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:04:53.877659  350525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:04:53.983394  350525 docker.go:234] disabling docker service ...
	I1205 07:04:53.983496  350525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:04:54.003496  350525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:04:54.019350  350525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:04:54.135590  350525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:04:54.264092  350525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:04:54.277025  350525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:04:54.299310  350525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:04:54.299389  350525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:04:54.312474  350525 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:04:54.312622  350525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:04:54.321860  350525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:04:54.330796  350525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:04:54.341701  350525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:04:54.352912  350525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:04:54.362810  350525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:04:54.378002  350525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:04:54.402832  350525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:04:54.412825  350525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:04:54.426187  350525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:04:54.522229  350525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:04:54.706313  350525 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:04:54.706415  350525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:04:54.711338  350525 start.go:564] Will wait 60s for crictl version
	I1205 07:04:54.711400  350525 ssh_runner.go:195] Run: which crictl
	I1205 07:04:54.716007  350525 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:04:54.743732  350525 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:04:54.743825  350525 ssh_runner.go:195] Run: crio --version
	I1205 07:04:54.778964  350525 ssh_runner.go:195] Run: crio --version
	I1205 07:04:54.819863  350525 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 07:04:53.227635  343486 out.go:252]   - Booting up control plane ...
	I1205 07:04:53.227797  343486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:04:53.227967  343486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:04:53.229002  343486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:04:53.246783  343486 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:04:53.246918  343486 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:04:53.253917  343486 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:04:53.254317  343486 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:04:53.254405  343486 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:04:53.388207  343486 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:04:53.388401  343486 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:04:53.888822  343486 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.670249ms
	I1205 07:04:53.892792  343486 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:04:53.892946  343486 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1205 07:04:53.893103  343486 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:04:53.893211  343486 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 07:04:54.398988  343486 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.901689ms
	I1205 07:04:55.559282  343486 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.666540417s
	I1205 07:04:54.821212  350525 cli_runner.go:164] Run: docker network inspect embed-certs-770390 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:04:54.842570  350525 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:04:54.847444  350525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:04:54.859795  350525 kubeadm.go:884] updating cluster {Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:04:54.859951  350525 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:04:54.860059  350525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:04:54.899986  350525 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:04:54.900009  350525 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:04:54.900062  350525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:04:54.931547  350525 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:04:54.931570  350525 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:04:54.931580  350525 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1205 07:04:54.931685  350525 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-770390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:04:54.931787  350525 ssh_runner.go:195] Run: crio config
	I1205 07:04:54.993063  350525 cni.go:84] Creating CNI manager for ""
	I1205 07:04:54.993094  350525 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:04:54.993117  350525 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:04:54.993145  350525 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-770390 NodeName:embed-certs-770390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:04:54.993302  350525 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-770390"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:04:54.993397  350525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:04:55.003037  350525 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:04:55.003105  350525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:04:55.013378  350525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1205 07:04:55.030805  350525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:04:55.048241  350525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
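The kubeadm config dumped above is written to the node as /var/tmp/minikube/kubeadm.yaml.new; it is a single multi-document manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch, not part of minikube or the test harness, that splits such a file on the YAML document separator and prints each document's apiVersion/kind (the local file path is an assumption for illustration):

// kinds.go - list the apiVersion/kind of each document in a multi-document
// kubeadm config like the one generated above. Illustrative sketch only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the config
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "apiVersion:") {
				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
			}
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
			}
		}
		fmt.Printf("document %d: %s / %s\n", i+1, apiVersion, kind)
	}
}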
	I1205 07:04:55.060945  350525 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:04:55.065090  350525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
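The bash one-liner above ensures /etc/hosts carries exactly one control-plane.minikube.internal entry: it drops any existing line for that name, appends the current mapping, and copies the staged result back over /etc/hosts. A rough Go sketch of the same rewrite, assuming direct write access instead of the logged temp-file-plus-sudo-cp dance:

// hosts.go - sketch of the /etc/hosts rewrite shown above: remove any stale
// control-plane.minikube.internal line and append the current mapping.
// Writes the file directly; the logged command stages via /tmp and sudo cp.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror grep -v $'\tcontrol-plane.minikube.internal$'.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}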
	I1205 07:04:55.076492  350525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:04:55.189639  350525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:04:55.216231  350525 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390 for IP: 192.168.76.2
	I1205 07:04:55.216253  350525 certs.go:195] generating shared ca certs ...
	I1205 07:04:55.216273  350525 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:04:55.216465  350525 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:04:55.216520  350525 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:04:55.216534  350525 certs.go:257] generating profile certs ...
	I1205 07:04:55.216604  350525 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/client.key
	I1205 07:04:55.216623  350525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/client.crt with IP's: []
	I1205 07:04:55.260746  350525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/client.crt ...
	I1205 07:04:55.260776  350525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/client.crt: {Name:mk785122ea752c1aa7ee376792ffffc3a1199966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:04:55.260919  350525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/client.key ...
	I1205 07:04:55.260931  350525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/client.key: {Name:mka22cac0582c43dc8ae6ca6a70906f3a233eea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:04:55.260998  350525 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key.46ffd30e
	I1205 07:04:55.261013  350525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.crt.46ffd30e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1205 07:04:55.289707  350525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.crt.46ffd30e ...
	I1205 07:04:55.289731  350525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.crt.46ffd30e: {Name:mk1d20097b7e7eb9b831adf7585762832ffc7c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:04:55.289894  350525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key.46ffd30e ...
	I1205 07:04:55.289909  350525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key.46ffd30e: {Name:mk971976ad248eee61bb6bd232d3f198ab14ffb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:04:55.290014  350525 certs.go:382] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.crt.46ffd30e -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.crt
	I1205 07:04:55.290105  350525 certs.go:386] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key.46ffd30e -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key
	I1205 07:04:55.290168  350525 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.key
	I1205 07:04:55.290186  350525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.crt with IP's: []
	I1205 07:04:55.444129  350525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.crt ...
	I1205 07:04:55.444214  350525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.crt: {Name:mk41da9c71c56a11eacbf6384f4d7bbd0b72fb2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:04:55.444436  350525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.key ...
	I1205 07:04:55.444458  350525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.key: {Name:mk77d33a76518b1b326ee11526fc29d8310ff2dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:04:55.444735  350525 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:04:55.444785  350525 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:04:55.444796  350525 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:04:55.444831  350525 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:04:55.444864  350525 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:04:55.444893  350525 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:04:55.444955  350525 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:04:55.445736  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:04:55.466667  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:04:55.485169  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:04:55.509651  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:04:55.542853  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 07:04:55.565252  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:04:55.582591  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:04:55.605271  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:04:55.642772  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:04:55.668876  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:04:55.691015  350525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:04:55.713004  350525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:04:55.725664  350525 ssh_runner.go:195] Run: openssl version
	I1205 07:04:55.732499  350525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:04:55.740135  350525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:04:55.747467  350525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:04:55.751440  350525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:04:55.751488  350525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:04:55.798501  350525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:04:55.808471  350525 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:04:55.816200  350525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:04:55.823553  350525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:04:55.831215  350525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:04:55.835228  350525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:04:55.835284  350525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:04:55.870189  350525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:04:55.878464  350525 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:04:55.888440  350525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:04:55.897344  350525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:04:55.906077  350525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:04:55.910169  350525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:04:55.910231  350525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:04:55.949031  350525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:04:55.956786  350525 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0
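The repeated pairs of "openssl x509 -hash -noout" and "ln -fs" above install each CA into /etc/ssl/certs under OpenSSL's subject-hash naming (for example minikubeCA.pem becomes b5213941.0), so system TLS libraries can find it. A hedged Go sketch of one such step, shelling out to openssl as the log does; it assumes openssl is on PATH and the paths are illustrative, not minikube's implementation:

// linkca.go - rough sketch of the hash-and-symlink step logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pemPath, certsDir string) error {
	// openssl prints the subject hash used as the link name, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}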
	I1205 07:04:55.964047  350525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:04:55.967519  350525 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:04:55.967562  350525 kubeadm.go:401] StartCluster: {Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:04:55.967620  350525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:04:55.967657  350525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:04:55.998271  350525 cri.go:89] found id: ""
	I1205 07:04:55.998363  350525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:04:56.006563  350525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:04:56.014280  350525 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:04:56.014359  350525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:04:56.021872  350525 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:04:56.021888  350525 kubeadm.go:158] found existing configuration files:
	
	I1205 07:04:56.021921  350525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:04:56.029165  350525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:04:56.029204  350525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:04:56.036356  350525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:04:56.044610  350525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:04:56.044661  350525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:04:56.051814  350525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:04:56.059033  350525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:04:56.059077  350525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:04:56.066097  350525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:04:56.073629  350525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:04:56.073679  350525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:04:56.080600  350525 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:04:56.120555  350525 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 07:04:56.120647  350525 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:04:56.141246  350525 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:04:56.141371  350525 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1205 07:04:56.141419  350525 kubeadm.go:319] OS: Linux
	I1205 07:04:56.141495  350525 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:04:56.141537  350525 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:04:56.141590  350525 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:04:56.141659  350525 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:04:56.141732  350525 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:04:56.141833  350525 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:04:56.141899  350525 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:04:56.141978  350525 kubeadm.go:319] CGROUPS_IO: enabled
	I1205 07:04:56.203900  350525 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:04:56.204063  350525 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:04:56.204212  350525 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:04:56.211468  350525 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:04:56.218969  350525 out.go:252]   - Generating certificates and keys ...
	I1205 07:04:56.219057  350525 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:04:56.219145  350525 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:04:56.433850  350525 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:04:56.757503  350525 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
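The crypto.go lines above issue profile certificates signed by the shared minikubeCA, including an apiserver cert whose SANs cover 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2. A simplified Go sketch of issuing a leaf cert with those IP SANs via crypto/x509; the throwaway CA, key size, validity window and skipped error handling are assumptions for brevity, not minikube's actual code:

// signcert.go - illustrative sketch of generating a server cert with IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch (minikube reuses the shared minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert carrying the IP SANs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}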
	I1205 07:04:53.801105  355650 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:04:53.801413  355650 start.go:159] libmachine.API.Create for "default-k8s-diff-port-172186" (driver="docker")
	I1205 07:04:53.801488  355650 client.go:173] LocalClient.Create starting
	I1205 07:04:53.801576  355650 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem
	I1205 07:04:53.801621  355650 main.go:143] libmachine: Decoding PEM data...
	I1205 07:04:53.801652  355650 main.go:143] libmachine: Parsing certificate...
	I1205 07:04:53.801738  355650 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem
	I1205 07:04:53.801771  355650 main.go:143] libmachine: Decoding PEM data...
	I1205 07:04:53.801792  355650 main.go:143] libmachine: Parsing certificate...
	I1205 07:04:53.802230  355650 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-172186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:04:53.825403  355650 cli_runner.go:211] docker network inspect default-k8s-diff-port-172186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:04:53.825492  355650 network_create.go:284] running [docker network inspect default-k8s-diff-port-172186] to gather additional debugging logs...
	I1205 07:04:53.825511  355650 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-172186
	W1205 07:04:53.842059  355650 cli_runner.go:211] docker network inspect default-k8s-diff-port-172186 returned with exit code 1
	I1205 07:04:53.842083  355650 network_create.go:287] error running [docker network inspect default-k8s-diff-port-172186]: docker network inspect default-k8s-diff-port-172186: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-172186 not found
	I1205 07:04:53.842095  355650 network_create.go:289] output of [docker network inspect default-k8s-diff-port-172186]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-172186 not found
	
	** /stderr **
	I1205 07:04:53.842197  355650 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:04:53.860989  355650 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d57cb024a629 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:ab:20:17:d9:1a} reservation:<nil>}
	I1205 07:04:53.861975  355650 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-29ce45f1f3fd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:f2:e1:5a:fb:fd} reservation:<nil>}
	I1205 07:04:53.863008  355650 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-18be16a82b81 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:25:6c:b3:f6:c6} reservation:<nil>}
	I1205 07:04:53.863907  355650 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-931902d22986 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:1a:d5:72:cd:51} reservation:<nil>}
	I1205 07:04:53.864702  355650 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b424bb5358c0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e6:4c:79:ba:46:30} reservation:<nil>}
	I1205 07:04:53.865684  355650 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86500}
	I1205 07:04:53.865705  355650 network_create.go:124] attempt to create docker network default-k8s-diff-port-172186 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1205 07:04:53.865746  355650 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-172186 default-k8s-diff-port-172186
	I1205 07:04:53.919408  355650 network_create.go:108] docker network default-k8s-diff-port-172186 192.168.94.0/24 created
	I1205 07:04:53.919442  355650 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-172186" container
	I1205 07:04:53.919527  355650 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:04:53.941100  355650 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-172186 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-172186 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:04:53.959563  355650 oci.go:103] Successfully created a docker volume default-k8s-diff-port-172186
	I1205 07:04:53.959638  355650 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-172186-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-172186 --entrypoint /usr/bin/test -v default-k8s-diff-port-172186:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:04:54.448094  355650 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-172186
	I1205 07:04:54.448250  355650 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:04:54.448270  355650 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 07:04:54.448356  355650 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-172186:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
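The network.go lines above walk candidate private /24 subnets (192.168.49.0/24, .58, .67, .76, .85, each already taken by an existing bridge) and settle on the first free one, 192.168.94.0/24, before creating the docker network. A small Go sketch of that stepping logic; the taken-subnet list is hard-coded here for illustration, whereas minikube derives it from docker network inspect:

// freesubnet.go - pick the first free 192.168.x.0/24, stepping the third
// octet by 9 as the candidates logged above do (49, 58, 67, 76, 85, 94, ...).
package main

import "fmt"

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	for octet := 49; octet < 256; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			fmt.Println("using free private subnet", subnet) // prints 192.168.94.0/24
			return
		}
	}
	fmt.Println("no free subnet found")
}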
	I1205 07:04:58.894775  343486 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001742895s
	I1205 07:04:58.915481  343486 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 07:04:58.926658  343486 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 07:04:58.936249  343486 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 07:04:58.936542  343486 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-008839 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 07:04:58.945114  343486 kubeadm.go:319] [bootstrap-token] Using token: gc0o4i.7s6t2kgiol5psxgk
	I1205 07:04:58.946312  343486 out.go:252]   - Configuring RBAC rules ...
	I1205 07:04:58.946505  343486 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 07:04:58.949801  343486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 07:04:58.955351  343486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 07:04:58.958073  343486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 07:04:58.960440  343486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 07:04:58.962915  343486 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 07:04:59.306431  343486 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 07:04:59.724785  343486 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 07:05:00.302546  343486 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 07:05:00.303831  343486 kubeadm.go:319] 
	I1205 07:05:00.303924  343486 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 07:05:00.303932  343486 kubeadm.go:319] 
	I1205 07:05:00.304021  343486 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 07:05:00.304027  343486 kubeadm.go:319] 
	I1205 07:05:00.304051  343486 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 07:05:00.304147  343486 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 07:05:00.304222  343486 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 07:05:00.304247  343486 kubeadm.go:319] 
	I1205 07:05:00.304336  343486 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 07:05:00.304349  343486 kubeadm.go:319] 
	I1205 07:05:00.304415  343486 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 07:05:00.304424  343486 kubeadm.go:319] 
	I1205 07:05:00.304494  343486 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 07:05:00.304602  343486 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 07:05:00.304701  343486 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 07:05:00.304716  343486 kubeadm.go:319] 
	I1205 07:05:00.304842  343486 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 07:05:00.304948  343486 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 07:05:00.304962  343486 kubeadm.go:319] 
	I1205 07:05:00.305080  343486 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gc0o4i.7s6t2kgiol5psxgk \
	I1205 07:05:00.305220  343486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f21ef1fe4655ade9215ff0d25196a0f1ad174afc7024ad048086e40bbc0de65d \
	I1205 07:05:00.305247  343486 kubeadm.go:319] 	--control-plane 
	I1205 07:05:00.305252  343486 kubeadm.go:319] 
	I1205 07:05:00.305372  343486 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 07:05:00.305387  343486 kubeadm.go:319] 
	I1205 07:05:00.305493  343486 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gc0o4i.7s6t2kgiol5psxgk \
	I1205 07:05:00.305625  343486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f21ef1fe4655ade9215ff0d25196a0f1ad174afc7024ad048086e40bbc0de65d 
	I1205 07:05:00.309024  343486 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1205 07:05:00.309160  343486 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
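The kubeadm join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 of the CA certificate's Subject Public Key Info. A short Go sketch that recomputes that value from a PEM-encoded CA cert; the ca.crt path is taken from the log and is illustrative:

// cahash.go - compute the sha256:<hex> discovery-token-ca-cert-hash for a CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}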
	I1205 07:05:00.309188  343486 cni.go:84] Creating CNI manager for ""
	I1205 07:05:00.309197  343486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:05:00.311822  343486 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1205 07:05:00.312882  343486 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 07:05:00.318438  343486 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1205 07:05:00.318455  343486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 07:05:00.333598  343486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 07:05:00.576076  343486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 07:05:00.576161  343486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:00.576175  343486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-008839 minikube.k8s.io/updated_at=2025_12_05T07_05_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=no-preload-008839 minikube.k8s.io/primary=true
	I1205 07:05:00.588941  343486 ops.go:34] apiserver oom_adj: -16
	I1205 07:05:00.694395  343486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Dec 05 07:04:49 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:49.695013082Z" level=info msg="Starting container: 99b50e1e731a5d312a4c161f4f9fb90261d4f2623f9c0cada6081caad68f3465" id=55375a32-5780-462f-a061-af8ba6622961 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:04:49 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:49.697246088Z" level=info msg="Started container" PID=2163 containerID=99b50e1e731a5d312a4c161f4f9fb90261d4f2623f9c0cada6081caad68f3465 description=kube-system/coredns-5dd5756b68-srvvk/coredns id=55375a32-5780-462f-a061-af8ba6622961 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c12a55fbd47dbeac6f563b5e9b80511116075f6eec5492f8824d37a08996268b
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.488438816Z" level=info msg="Running pod sandbox: default/busybox/POD" id=06e4176b-0891-4fb1-aa39-5049bc59f976 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.488525354Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.700509337Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f87de677736c4f085af68b93100b2dd9d1281a785ba4ddb8355aaf7da04dc754 UID:5446a9ce-ce83-4e1d-9425-c44cc40a4d5c NetNS:/var/run/netns/fe6c9621-5830-495f-bf6e-8cc18a0e118a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d14730}] Aliases:map[]}"
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.70054791Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.710638744Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f87de677736c4f085af68b93100b2dd9d1281a785ba4ddb8355aaf7da04dc754 UID:5446a9ce-ce83-4e1d-9425-c44cc40a4d5c NetNS:/var/run/netns/fe6c9621-5830-495f-bf6e-8cc18a0e118a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d14730}] Aliases:map[]}"
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.710792975Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.71150714Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.712296651Z" level=info msg="Ran pod sandbox f87de677736c4f085af68b93100b2dd9d1281a785ba4ddb8355aaf7da04dc754 with infra container: default/busybox/POD" id=06e4176b-0891-4fb1-aa39-5049bc59f976 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.713555272Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6bd1996d-6a88-4658-920a-9e9231202f96 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.713685436Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6bd1996d-6a88-4658-920a-9e9231202f96 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.713729317Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6bd1996d-6a88-4658-920a-9e9231202f96 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.714277973Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c023f339-94f3-4c16-a626-c5a90035f0d4 name=/runtime.v1.ImageService/PullImage
	Dec 05 07:04:52 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:52.716035657Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 05 07:04:53 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:53.383196559Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c023f339-94f3-4c16-a626-c5a90035f0d4 name=/runtime.v1.ImageService/PullImage
	Dec 05 07:04:53 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:53.384087821Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cd2911c6-be1c-4d03-9913-dc03501cfbb0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:04:53 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:53.385696689Z" level=info msg="Creating container: default/busybox/busybox" id=022eae50-a7ba-4b2a-99a2-87dba4ac5e08 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:04:53 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:53.385823044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:04:53 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:53.390545661Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:04:53 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:53.391140039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:04:53 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:53.411185863Z" level=info msg="Created container 1335bdda176f39dd296ffd5e643499a430fd93bea2b82a2af9a370326d967331: default/busybox/busybox" id=022eae50-a7ba-4b2a-99a2-87dba4ac5e08 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:04:53 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:53.411835525Z" level=info msg="Starting container: 1335bdda176f39dd296ffd5e643499a430fd93bea2b82a2af9a370326d967331" id=a2e3f8b2-6bc5-45f1-a4c9-cd6114c017da name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:04:53 old-k8s-version-874709 crio[772]: time="2025-12-05T07:04:53.41401797Z" level=info msg="Started container" PID=2227 containerID=1335bdda176f39dd296ffd5e643499a430fd93bea2b82a2af9a370326d967331 description=default/busybox/busybox id=a2e3f8b2-6bc5-45f1-a4c9-cd6114c017da name=/runtime.v1.RuntimeService/StartContainer sandboxID=f87de677736c4f085af68b93100b2dd9d1281a785ba4ddb8355aaf7da04dc754
	Dec 05 07:05:00 old-k8s-version-874709 crio[772]: time="2025-12-05T07:05:00.221477652Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	1335bdda176f3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   f87de677736c4       busybox                                          default
	99b50e1e731a5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   c12a55fbd47db       coredns-5dd5756b68-srvvk                         kube-system
	9c33ed3daa9a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   f0ab81bc1146b       storage-provisioner                              kube-system
	29a42a078d693       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   17df55c17bfcd       kindnet-f9lmb                                    kube-system
	331fe76c8f1aa       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   6c0b5b7d9fbbf       kube-proxy-98jls                                 kube-system
	65d2971f5bd0a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   b9277fbf188dc       etcd-old-k8s-version-874709                      kube-system
	e46107236d4ef       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   da8528b309ca1       kube-apiserver-old-k8s-version-874709            kube-system
	3e4212a588c71       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   f20d09077c5eb       kube-scheduler-old-k8s-version-874709            kube-system
	c097d6b12da73       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   0fa39617a9010       kube-controller-manager-old-k8s-version-874709   kube-system
	
	
	==> coredns [99b50e1e731a5d312a4c161f4f9fb90261d4f2623f9c0cada6081caad68f3465] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44331 - 41102 "HINFO IN 2192530356973728632.209842972570945858. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.449379948s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-874709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-874709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=old-k8s-version-874709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_04_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:04:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-874709
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:04:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:04:52 +0000   Fri, 05 Dec 2025 07:04:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:04:52 +0000   Fri, 05 Dec 2025 07:04:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:04:52 +0000   Fri, 05 Dec 2025 07:04:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:04:52 +0000   Fri, 05 Dec 2025 07:04:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-874709
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                5af588f9-e276-46d0-bc7e-d873d5f0f898
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-srvvk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-874709                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-f9lmb                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-874709             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-874709    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-98jls                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-874709             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node old-k8s-version-874709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-874709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-874709 event: Registered Node old-k8s-version-874709 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-874709 status is now: NodeReady
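The "Allocated resources" table earlier in the describe-nodes output reports requests as a share of allocatable capacity: 850m of CPU on an 8-CPU node prints as 10%, and 220Mi of ~32Gi prints as 0%, which suggests truncation to whole percent. A tiny Go sketch of that arithmetic; the truncation behaviour is an assumption inferred from the printed values, not confirmed from kubectl source:

// pct.go - requests over allocatable, truncated to a whole percent.
package main

import "fmt"

func main() {
	requestsMilli := int64(850)     // 850m CPU requested
	allocatableMilli := int64(8000) // 8 CPUs allocatable
	pct := requestsMilli * 100 / allocatableMilli
	fmt.Printf("cpu %dm (%d%%)\n", requestsMilli, pct) // cpu 850m (10%)
}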
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [65d2971f5bd0a3630856478f5da1cc3b6efd7ad8a4d7175b550987b00351d2f1] <==
	{"level":"info","ts":"2025-12-05T07:04:17.270795Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-05T07:04:17.270883Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-05T07:04:17.650383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-05T07:04:17.650425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-05T07:04:17.650451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-05T07:04:17.650465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-05T07:04:17.650471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-05T07:04:17.650479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-05T07:04:17.650497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-05T07:04:17.651265Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-874709 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-05T07:04:17.651292Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-05T07:04:17.651521Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-05T07:04:17.651693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-05T07:04:17.651733Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-05T07:04:17.652032Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-05T07:04:17.652309Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-05T07:04:17.652504Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-05T07:04:17.652585Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-05T07:04:17.652589Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-05T07:04:17.65386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-05T07:04:52.350414Z","caller":"traceutil/trace.go:171","msg":"trace[706879508] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"157.411157ms","start":"2025-12-05T07:04:52.192971Z","end":"2025-12-05T07:04:52.350382Z","steps":["trace[706879508] 'process raft request'  (duration: 134.099371ms)","trace[706879508] 'compare'  (duration: 23.09985ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-05T07:04:52.47448Z","caller":"traceutil/trace.go:171","msg":"trace[73964557] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:466; }","duration":"104.164409ms","start":"2025-12-05T07:04:52.370289Z","end":"2025-12-05T07:04:52.474454Z","steps":["trace[73964557] 'read index received'  (duration: 103.165867ms)","trace[73964557] 'applied index is now lower than readState.Index'  (duration: 997.764µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-05T07:04:52.474527Z","caller":"traceutil/trace.go:171","msg":"trace[1970849501] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"119.742138ms","start":"2025-12-05T07:04:52.354764Z","end":"2025-12-05T07:04:52.474506Z","steps":["trace[1970849501] 'process raft request'  (duration: 118.680331ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T07:04:52.474611Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.295665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-12-05T07:04:52.474665Z","caller":"traceutil/trace.go:171","msg":"trace[1725014895] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:448; }","duration":"104.389149ms","start":"2025-12-05T07:04:52.370264Z","end":"2025-12-05T07:04:52.474653Z","steps":["trace[1725014895] 'agreement among raft nodes before linearized reading'  (duration: 104.269328ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:05:01 up  1:47,  0 user,  load average: 4.65, 3.21, 2.09
	Linux old-k8s-version-874709 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [29a42a078d693ab3fd1e99a07691051fa20db681b12ef515ba79d521f7d55e1d] <==
	I1205 07:04:36.958372       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:04:37.047140       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1205 07:04:37.047416       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:04:37.047450       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:04:37.047490       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:04:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:04:37.346219       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:04:37.346251       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:04:37.346262       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:04:37.346482       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:04:37.646496       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:04:37.646523       1 metrics.go:72] Registering metrics
	I1205 07:04:37.646608       1 controller.go:711] "Syncing nftables rules"
	I1205 07:04:47.266407       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:04:47.266469       1 main.go:301] handling current node
	I1205 07:04:57.259282       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:04:57.259373       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e46107236d4ef58cc41bac48afecfb29bf90ed7f1c8a26196b9a900217038d84] <==
	I1205 07:04:18.814661       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1205 07:04:18.814679       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1205 07:04:18.818156       1 shared_informer.go:318] Caches are synced for configmaps
	I1205 07:04:18.818864       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1205 07:04:18.818941       1 aggregator.go:166] initial CRD sync complete...
	I1205 07:04:18.818970       1 autoregister_controller.go:141] Starting autoregister controller
	I1205 07:04:18.818994       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:04:18.819019       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:04:18.838526       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1205 07:04:18.847902       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:04:19.718657       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1205 07:04:19.722201       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1205 07:04:19.722215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:04:20.121149       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:04:20.161745       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:04:20.221564       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1205 07:04:20.226886       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1205 07:04:20.227949       1 controller.go:624] quota admission added evaluator for: endpoints
	I1205 07:04:20.231964       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:04:20.753654       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1205 07:04:21.798890       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1205 07:04:21.809782       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 07:04:21.818559       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1205 07:04:34.362874       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1205 07:04:34.461875       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c097d6b12da730088a366c9a63b1a31b8a8fc2e8768d95b3cac7d7963bbd7a38] <==
	I1205 07:04:33.753719       1 shared_informer.go:318] Caches are synced for cronjob
	I1205 07:04:33.809358       1 shared_informer.go:318] Caches are synced for resource quota
	I1205 07:04:34.124616       1 shared_informer.go:318] Caches are synced for garbage collector
	I1205 07:04:34.182735       1 shared_informer.go:318] Caches are synced for garbage collector
	I1205 07:04:34.182760       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1205 07:04:34.366843       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1205 07:04:34.480262       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-98jls"
	I1205 07:04:34.480803       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-f9lmb"
	I1205 07:04:34.616336       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-srvvk"
	I1205 07:04:34.624924       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lcn7g"
	I1205 07:04:34.632100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="265.598809ms"
	I1205 07:04:34.641870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.712145ms"
	I1205 07:04:34.642193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.726µs"
	I1205 07:04:34.648411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.152µs"
	I1205 07:04:34.796001       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1205 07:04:34.814527       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-lcn7g"
	I1205 07:04:34.820721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.35058ms"
	I1205 07:04:34.826044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.276594ms"
	I1205 07:04:34.826182       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.55µs"
	I1205 07:04:47.843169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.628µs"
	I1205 07:04:47.859535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.278µs"
	I1205 07:04:48.720648       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1205 07:04:49.974819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.462µs"
	I1205 07:04:49.994493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.508701ms"
	I1205 07:04:49.994655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.582µs"
	
	
	==> kube-proxy [331fe76c8f1aac8054d208152b7db76b87b58d213f86285f93d6c8ce9d2b858e] <==
	I1205 07:04:34.893156       1 server_others.go:69] "Using iptables proxy"
	I1205 07:04:34.903296       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1205 07:04:34.930718       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:04:34.934124       1 server_others.go:152] "Using iptables Proxier"
	I1205 07:04:34.934167       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1205 07:04:34.934176       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1205 07:04:34.934206       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 07:04:34.934511       1 server.go:846] "Version info" version="v1.28.0"
	I1205 07:04:34.934528       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:04:34.935138       1 config.go:188] "Starting service config controller"
	I1205 07:04:34.935213       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 07:04:34.935168       1 config.go:97] "Starting endpoint slice config controller"
	I1205 07:04:34.935400       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 07:04:34.935485       1 config.go:315] "Starting node config controller"
	I1205 07:04:34.935553       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 07:04:35.035897       1 shared_informer.go:318] Caches are synced for node config
	I1205 07:04:35.035910       1 shared_informer.go:318] Caches are synced for service config
	I1205 07:04:35.037108       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3e4212a588c7151e51e5770b8ef9f2f65174a107e6e9eda74c2de84d61fef51a] <==
	W1205 07:04:18.782054       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 07:04:18.782077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 07:04:18.782140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 07:04:18.782188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1205 07:04:18.781975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 07:04:18.782231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 07:04:18.782148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 07:04:18.782255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 07:04:18.782404       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 07:04:18.782423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 07:04:19.602443       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 07:04:19.602480       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 07:04:19.639878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 07:04:19.639908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 07:04:19.644051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 07:04:19.644082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1205 07:04:19.765590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 07:04:19.765645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1205 07:04:19.887960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 07:04:19.888076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 07:04:19.946283       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 07:04:19.946313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1205 07:04:19.984966       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 07:04:19.985012       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1205 07:04:20.375180       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 07:04:34 old-k8s-version-874709 kubelet[1408]: I1205 07:04:34.551414    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e48ecb2-f73b-4f7e-a021-0e33d12ef572-xtables-lock\") pod \"kube-proxy-98jls\" (UID: \"2e48ecb2-f73b-4f7e-a021-0e33d12ef572\") " pod="kube-system/kube-proxy-98jls"
	Dec 05 07:04:34 old-k8s-version-874709 kubelet[1408]: I1205 07:04:34.551504    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ddfb2078-ed57-42bc-9f8a-448f7a54e6d4-cni-cfg\") pod \"kindnet-f9lmb\" (UID: \"ddfb2078-ed57-42bc-9f8a-448f7a54e6d4\") " pod="kube-system/kindnet-f9lmb"
	Dec 05 07:04:34 old-k8s-version-874709 kubelet[1408]: I1205 07:04:34.551790    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62265\" (UniqueName: \"kubernetes.io/projected/2e48ecb2-f73b-4f7e-a021-0e33d12ef572-kube-api-access-62265\") pod \"kube-proxy-98jls\" (UID: \"2e48ecb2-f73b-4f7e-a021-0e33d12ef572\") " pod="kube-system/kube-proxy-98jls"
	Dec 05 07:04:34 old-k8s-version-874709 kubelet[1408]: I1205 07:04:34.551828    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddfb2078-ed57-42bc-9f8a-448f7a54e6d4-xtables-lock\") pod \"kindnet-f9lmb\" (UID: \"ddfb2078-ed57-42bc-9f8a-448f7a54e6d4\") " pod="kube-system/kindnet-f9lmb"
	Dec 05 07:04:34 old-k8s-version-874709 kubelet[1408]: I1205 07:04:34.551957    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddfb2078-ed57-42bc-9f8a-448f7a54e6d4-lib-modules\") pod \"kindnet-f9lmb\" (UID: \"ddfb2078-ed57-42bc-9f8a-448f7a54e6d4\") " pod="kube-system/kindnet-f9lmb"
	Dec 05 07:04:34 old-k8s-version-874709 kubelet[1408]: I1205 07:04:34.551986    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e48ecb2-f73b-4f7e-a021-0e33d12ef572-kube-proxy\") pod \"kube-proxy-98jls\" (UID: \"2e48ecb2-f73b-4f7e-a021-0e33d12ef572\") " pod="kube-system/kube-proxy-98jls"
	Dec 05 07:04:34 old-k8s-version-874709 kubelet[1408]: I1205 07:04:34.552120    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e48ecb2-f73b-4f7e-a021-0e33d12ef572-lib-modules\") pod \"kube-proxy-98jls\" (UID: \"2e48ecb2-f73b-4f7e-a021-0e33d12ef572\") " pod="kube-system/kube-proxy-98jls"
	Dec 05 07:04:34 old-k8s-version-874709 kubelet[1408]: I1205 07:04:34.552159    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdvm8\" (UniqueName: \"kubernetes.io/projected/ddfb2078-ed57-42bc-9f8a-448f7a54e6d4-kube-api-access-fdvm8\") pod \"kindnet-f9lmb\" (UID: \"ddfb2078-ed57-42bc-9f8a-448f7a54e6d4\") " pod="kube-system/kindnet-f9lmb"
	Dec 05 07:04:34 old-k8s-version-874709 kubelet[1408]: I1205 07:04:34.937916    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-98jls" podStartSLOduration=0.937859459 podCreationTimestamp="2025-12-05 07:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:04:34.937470493 +0000 UTC m=+13.164115448" watchObservedRunningTime="2025-12-05 07:04:34.937859459 +0000 UTC m=+13.164504420"
	Dec 05 07:04:36 old-k8s-version-874709 kubelet[1408]: I1205 07:04:36.949214    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-f9lmb" podStartSLOduration=1.025287592 podCreationTimestamp="2025-12-05 07:04:34 +0000 UTC" firstStartedPulling="2025-12-05 07:04:34.803869235 +0000 UTC m=+13.030514164" lastFinishedPulling="2025-12-05 07:04:36.727752345 +0000 UTC m=+14.954397273" observedRunningTime="2025-12-05 07:04:36.949051662 +0000 UTC m=+15.175696598" watchObservedRunningTime="2025-12-05 07:04:36.949170701 +0000 UTC m=+15.175815636"
	Dec 05 07:04:47 old-k8s-version-874709 kubelet[1408]: I1205 07:04:47.798095    1408 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 05 07:04:47 old-k8s-version-874709 kubelet[1408]: I1205 07:04:47.843178    1408 topology_manager.go:215] "Topology Admit Handler" podUID="adfb4a20-1e05-4379-89b3-ed0b9a5a4b73" podNamespace="kube-system" podName="coredns-5dd5756b68-srvvk"
	Dec 05 07:04:47 old-k8s-version-874709 kubelet[1408]: W1205 07:04:47.845823    1408 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-874709" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-874709' and this object
	Dec 05 07:04:47 old-k8s-version-874709 kubelet[1408]: E1205 07:04:47.845873    1408 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-874709" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-874709' and this object
	Dec 05 07:04:47 old-k8s-version-874709 kubelet[1408]: I1205 07:04:47.849605    1408 topology_manager.go:215] "Topology Admit Handler" podUID="c0d7103d-17fc-479f-8958-66bb01a59f8b" podNamespace="kube-system" podName="storage-provisioner"
	Dec 05 07:04:47 old-k8s-version-874709 kubelet[1408]: I1205 07:04:47.942005    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adfb4a20-1e05-4379-89b3-ed0b9a5a4b73-config-volume\") pod \"coredns-5dd5756b68-srvvk\" (UID: \"adfb4a20-1e05-4379-89b3-ed0b9a5a4b73\") " pod="kube-system/coredns-5dd5756b68-srvvk"
	Dec 05 07:04:47 old-k8s-version-874709 kubelet[1408]: I1205 07:04:47.942052    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpnzg\" (UniqueName: \"kubernetes.io/projected/c0d7103d-17fc-479f-8958-66bb01a59f8b-kube-api-access-zpnzg\") pod \"storage-provisioner\" (UID: \"c0d7103d-17fc-479f-8958-66bb01a59f8b\") " pod="kube-system/storage-provisioner"
	Dec 05 07:04:47 old-k8s-version-874709 kubelet[1408]: I1205 07:04:47.942075    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c0d7103d-17fc-479f-8958-66bb01a59f8b-tmp\") pod \"storage-provisioner\" (UID: \"c0d7103d-17fc-479f-8958-66bb01a59f8b\") " pod="kube-system/storage-provisioner"
	Dec 05 07:04:47 old-k8s-version-874709 kubelet[1408]: I1205 07:04:47.942100    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6jvm\" (UniqueName: \"kubernetes.io/projected/adfb4a20-1e05-4379-89b3-ed0b9a5a4b73-kube-api-access-s6jvm\") pod \"coredns-5dd5756b68-srvvk\" (UID: \"adfb4a20-1e05-4379-89b3-ed0b9a5a4b73\") " pod="kube-system/coredns-5dd5756b68-srvvk"
	Dec 05 07:04:48 old-k8s-version-874709 kubelet[1408]: I1205 07:04:48.968010    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.967955931 podCreationTimestamp="2025-12-05 07:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:04:48.967462212 +0000 UTC m=+27.194107148" watchObservedRunningTime="2025-12-05 07:04:48.967955931 +0000 UTC m=+27.194600935"
	Dec 05 07:04:49 old-k8s-version-874709 kubelet[1408]: E1205 07:04:49.043709    1408 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 05 07:04:49 old-k8s-version-874709 kubelet[1408]: E1205 07:04:49.043838    1408 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adfb4a20-1e05-4379-89b3-ed0b9a5a4b73-config-volume podName:adfb4a20-1e05-4379-89b3-ed0b9a5a4b73 nodeName:}" failed. No retries permitted until 2025-12-05 07:04:49.543807445 +0000 UTC m=+27.770452384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/adfb4a20-1e05-4379-89b3-ed0b9a5a4b73-config-volume") pod "coredns-5dd5756b68-srvvk" (UID: "adfb4a20-1e05-4379-89b3-ed0b9a5a4b73") : failed to sync configmap cache: timed out waiting for the condition
	Dec 05 07:04:49 old-k8s-version-874709 kubelet[1408]: I1205 07:04:49.974124    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-srvvk" podStartSLOduration=15.974071451 podCreationTimestamp="2025-12-05 07:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:04:49.973717251 +0000 UTC m=+28.200362202" watchObservedRunningTime="2025-12-05 07:04:49.974071451 +0000 UTC m=+28.200716387"
	Dec 05 07:04:52 old-k8s-version-874709 kubelet[1408]: I1205 07:04:52.186787    1408 topology_manager.go:215] "Topology Admit Handler" podUID="5446a9ce-ce83-4e1d-9425-c44cc40a4d5c" podNamespace="default" podName="busybox"
	Dec 05 07:04:52 old-k8s-version-874709 kubelet[1408]: I1205 07:04:52.267505    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6nbp\" (UniqueName: \"kubernetes.io/projected/5446a9ce-ce83-4e1d-9425-c44cc40a4d5c-kube-api-access-h6nbp\") pod \"busybox\" (UID: \"5446a9ce-ce83-4e1d-9425-c44cc40a4d5c\") " pod="default/busybox"
	
	
	==> storage-provisioner [9c33ed3daa9a0f84b20b7115f9386d24cf9005b8574f7a6fb7e1f14266b0c498] <==
	I1205 07:04:48.233611       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:04:48.244491       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:04:48.244530       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 07:04:48.252169       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:04:48.252378       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-874709_35f24bd8-af19-4e76-b1db-d09d27297020!
	I1205 07:04:48.252362       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"064af1fe-2240-4284-9f0a-716d2b949fbe", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-874709_35f24bd8-af19-4e76-b1db-d09d27297020 became leader
	I1205 07:04:48.352926       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-874709_35f24bd8-af19-4e76-b1db-d09d27297020!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-874709 -n old-k8s-version-874709
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-874709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (239.517452ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:05:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
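The MK_ADDON_ENABLE_PAUSED error above is raised by minikube's paused-state check, which runs "sudo runc list -f json" inside the node and fails here because /run/runc does not exist. A minimal manual reproduction of that check, assuming the no-preload-008839 node is still running (hypothetical diagnostic commands, not part of the test):

	# confirm the runc state directory named in the error is absent on the node
	minikube -p no-preload-008839 ssh -- sudo ls -ld /run/runc
	# re-run the same listing command the paused-state check uses
	minikube -p no-preload-008839 ssh -- sudo runc list -f json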
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-008839 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-008839 describe deploy/metrics-server -n kube-system: exit status 1 (57.755628ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-008839 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
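For reference, the image assertion the test makes at this point can be approximated manually; a sketch assuming the profile is still up (here it would likewise fail, since the metrics-server deployment was never created):

	kubectl --context no-preload-008839 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'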
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-008839
helpers_test.go:243: (dbg) docker inspect no-preload-008839:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55",
	        "Created": "2025-12-05T07:04:31.584731019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 344270,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:04:31.616866103Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/hosts",
	        "LogPath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55-json.log",
	        "Name": "/no-preload-008839",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-008839:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-008839",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55",
	                "LowerDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-008839",
	                "Source": "/var/lib/docker/volumes/no-preload-008839/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-008839",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-008839",
	                "name.minikube.sigs.k8s.io": "no-preload-008839",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f7cd5d5f9ae9729b498dd25ce1c1308e57f788e143ce4d127475f29db002eab4",
	            "SandboxKey": "/var/run/docker/netns/f7cd5d5f9ae9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-008839": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b424bb5358c0ff78bed421f719287c2770f3aa97ebe3ad623f9f893abf37a15e",
	                    "EndpointID": "cd82d73d35adcbc65706a4750d74741399106450f95dc66248c537a8c2af5361",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "c2:9b:52:43:86:2a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-008839",
	                        "9ca1114060bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008839 -n no-preload-008839
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-008839 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-397607 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo docker system info                                                                                                                                                                                                      │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo containerd config dump                                                                                                                                                                                                  │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo crio config                                                                                                                                                                                                             │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p bridge-397607                                                                                                                                                                                                                              │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p disable-driver-mounts-245906                                                                                                                                                                                                               │ disable-driver-mounts-245906 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p old-k8s-version-874709 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-874709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:05:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:05:18.732725  361350 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:05:18.733368  361350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:05:18.733384  361350 out.go:374] Setting ErrFile to fd 2...
	I1205 07:05:18.733392  361350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:05:18.734057  361350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:05:18.734618  361350 out.go:368] Setting JSON to false
	I1205 07:05:18.735775  361350 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6463,"bootTime":1764911856,"procs":390,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:05:18.735858  361350 start.go:143] virtualization: kvm guest
	I1205 07:05:18.737474  361350 out.go:179] * [old-k8s-version-874709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:05:18.738484  361350 notify.go:221] Checking for updates...
	I1205 07:05:18.738505  361350 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:05:18.739542  361350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:05:18.740672  361350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:05:18.741792  361350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:05:18.742805  361350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:05:18.743806  361350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:05:18.745106  361350 config.go:182] Loaded profile config "old-k8s-version-874709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1205 07:05:18.746724  361350 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1205 07:05:18.747697  361350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:05:18.771894  361350 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:05:18.771966  361350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:05:18.833543  361350 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-05 07:05:18.822119874 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:05:18.833722  361350 docker.go:319] overlay module found
	I1205 07:05:18.836020  361350 out.go:179] * Using the docker driver based on existing profile
	I1205 07:05:18.837044  361350 start.go:309] selected driver: docker
	I1205 07:05:18.837059  361350 start.go:927] validating driver "docker" against &{Name:old-k8s-version-874709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874709 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:05:18.837147  361350 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:05:18.837918  361350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:05:18.902352  361350 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-05 07:05:18.893064552 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:05:18.902717  361350 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:05:18.902759  361350 cni.go:84] Creating CNI manager for ""
	I1205 07:05:18.902833  361350 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:05:18.902885  361350 start.go:353] cluster config:
	{Name:old-k8s-version-874709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:05:18.904269  361350 out.go:179] * Starting "old-k8s-version-874709" primary control-plane node in "old-k8s-version-874709" cluster
	I1205 07:05:18.905376  361350 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:05:18.906871  361350 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:05:18.907897  361350 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1205 07:05:18.907932  361350 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1205 07:05:18.907956  361350 cache.go:65] Caching tarball of preloaded images
	I1205 07:05:18.908007  361350 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:05:18.908072  361350 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 07:05:18.908089  361350 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1205 07:05:18.908227  361350 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/config.json ...
	I1205 07:05:18.929964  361350 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:05:18.929984  361350 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:05:18.929998  361350 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:05:18.930029  361350 start.go:360] acquireMachinesLock for old-k8s-version-874709: {Name:mk958e6ec1b48ba175b34133d850223c6d6a6548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:05:18.930090  361350 start.go:364] duration metric: took 40.527µs to acquireMachinesLock for "old-k8s-version-874709"
	I1205 07:05:18.930112  361350 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:05:18.930120  361350 fix.go:54] fixHost starting: 
	I1205 07:05:18.930395  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:18.946452  361350 fix.go:112] recreateIfNeeded on old-k8s-version-874709: state=Stopped err=<nil>
	W1205 07:05:18.946475  361350 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:05:16.560664  343486 node_ready.go:57] node "no-preload-008839" has "Ready":"False" status (will retry)
	W1205 07:05:18.561164  343486 node_ready.go:57] node "no-preload-008839" has "Ready":"False" status (will retry)
	I1205 07:05:19.070821  343486 node_ready.go:49] node "no-preload-008839" is "Ready"
	I1205 07:05:19.070909  343486 node_ready.go:38] duration metric: took 13.013062292s for node "no-preload-008839" to be "Ready" ...
	I1205 07:05:19.070954  343486 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:05:19.071049  343486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:05:19.083748  343486 api_server.go:72] duration metric: took 13.322120804s to wait for apiserver process to appear ...
	I1205 07:05:19.083809  343486 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:05:19.083825  343486 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:05:19.088106  343486 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1205 07:05:19.089182  343486 api_server.go:141] control plane version: v1.35.0-beta.0
	I1205 07:05:19.089208  343486 api_server.go:131] duration metric: took 5.392067ms to wait for apiserver health ...
	I1205 07:05:19.089219  343486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:05:19.092755  343486 system_pods.go:59] 8 kube-system pods found
	I1205 07:05:19.092782  343486 system_pods.go:61] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:19.092789  343486 system_pods.go:61] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running
	I1205 07:05:19.092795  343486 system_pods.go:61] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running
	I1205 07:05:19.092799  343486 system_pods.go:61] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running
	I1205 07:05:19.092803  343486 system_pods.go:61] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running
	I1205 07:05:19.092805  343486 system_pods.go:61] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running
	I1205 07:05:19.092808  343486 system_pods.go:61] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running
	I1205 07:05:19.092813  343486 system_pods.go:61] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:19.092818  343486 system_pods.go:74] duration metric: took 3.593925ms to wait for pod list to return data ...
	I1205 07:05:19.092824  343486 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:05:19.095219  343486 default_sa.go:45] found service account: "default"
	I1205 07:05:19.095276  343486 default_sa.go:55] duration metric: took 2.41461ms for default service account to be created ...
	I1205 07:05:19.095292  343486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:05:19.097894  343486 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:19.097926  343486 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:19.097934  343486 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running
	I1205 07:05:19.097943  343486 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running
	I1205 07:05:19.097954  343486 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running
	I1205 07:05:19.097960  343486 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running
	I1205 07:05:19.097965  343486 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running
	I1205 07:05:19.097971  343486 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running
	I1205 07:05:19.097979  343486 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:19.098006  343486 retry.go:31] will retry after 187.583527ms: missing components: kube-dns
	I1205 07:05:19.289459  343486 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:19.289489  343486 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:19.289495  343486 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running
	I1205 07:05:19.289501  343486 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running
	I1205 07:05:19.289505  343486 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running
	I1205 07:05:19.289509  343486 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running
	I1205 07:05:19.289514  343486 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running
	I1205 07:05:19.289518  343486 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running
	I1205 07:05:19.289523  343486 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:19.289540  343486 retry.go:31] will retry after 293.191566ms: missing components: kube-dns
	I1205 07:05:19.586140  343486 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:19.586170  343486 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:19.586177  343486 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running
	I1205 07:05:19.586182  343486 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running
	I1205 07:05:19.586186  343486 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running
	I1205 07:05:19.586191  343486 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running
	I1205 07:05:19.586194  343486 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running
	I1205 07:05:19.586198  343486 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running
	I1205 07:05:19.586202  343486 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:19.586216  343486 retry.go:31] will retry after 446.49776ms: missing components: kube-dns
	I1205 07:05:20.037104  343486 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:20.037133  343486 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Running
	I1205 07:05:20.037140  343486 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running
	I1205 07:05:20.037144  343486 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running
	I1205 07:05:20.037148  343486 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running
	I1205 07:05:20.037152  343486 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running
	I1205 07:05:20.037155  343486 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running
	I1205 07:05:20.037158  343486 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running
	I1205 07:05:20.037161  343486 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Running
	I1205 07:05:20.037167  343486 system_pods.go:126] duration metric: took 941.870246ms to wait for k8s-apps to be running ...
	I1205 07:05:20.037174  343486 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:05:20.037213  343486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:05:20.050812  343486 system_svc.go:56] duration metric: took 13.630899ms WaitForService to wait for kubelet
	I1205 07:05:20.050836  343486 kubeadm.go:587] duration metric: took 14.289211279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:05:20.050852  343486 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:05:20.053182  343486 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:05:20.053206  343486 node_conditions.go:123] node cpu capacity is 8
	I1205 07:05:20.053228  343486 node_conditions.go:105] duration metric: took 2.370253ms to run NodePressure ...
	I1205 07:05:20.053242  343486 start.go:242] waiting for startup goroutines ...
	I1205 07:05:20.053255  343486 start.go:247] waiting for cluster config update ...
	I1205 07:05:20.053272  343486 start.go:256] writing updated cluster config ...
	I1205 07:05:20.053567  343486 ssh_runner.go:195] Run: rm -f paused
	I1205 07:05:20.057994  343486 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:05:20.061040  343486 pod_ready.go:83] waiting for pod "coredns-7d764666f9-bvbhf" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.065075  343486 pod_ready.go:94] pod "coredns-7d764666f9-bvbhf" is "Ready"
	I1205 07:05:20.065091  343486 pod_ready.go:86] duration metric: took 4.033756ms for pod "coredns-7d764666f9-bvbhf" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.066866  343486 pod_ready.go:83] waiting for pod "etcd-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.072209  343486 pod_ready.go:94] pod "etcd-no-preload-008839" is "Ready"
	I1205 07:05:20.072224  343486 pod_ready.go:86] duration metric: took 5.344359ms for pod "etcd-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.073802  343486 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.076905  343486 pod_ready.go:94] pod "kube-apiserver-no-preload-008839" is "Ready"
	I1205 07:05:20.076918  343486 pod_ready.go:86] duration metric: took 3.100692ms for pod "kube-apiserver-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.078571  343486 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.461552  343486 pod_ready.go:94] pod "kube-controller-manager-no-preload-008839" is "Ready"
	I1205 07:05:20.461587  343486 pod_ready.go:86] duration metric: took 382.993872ms for pod "kube-controller-manager-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.662226  343486 pod_ready.go:83] waiting for pod "kube-proxy-s9zn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:21.061491  343486 pod_ready.go:94] pod "kube-proxy-s9zn2" is "Ready"
	I1205 07:05:21.061522  343486 pod_ready.go:86] duration metric: took 399.270874ms for pod "kube-proxy-s9zn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:21.262993  343486 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:21.662454  343486 pod_ready.go:94] pod "kube-scheduler-no-preload-008839" is "Ready"
	I1205 07:05:21.662476  343486 pod_ready.go:86] duration metric: took 399.457643ms for pod "kube-scheduler-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:21.662488  343486 pod_ready.go:40] duration metric: took 1.604463442s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:05:21.709168  343486 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1205 07:05:21.710903  343486 out.go:179] * Done! kubectl is now configured to use "no-preload-008839" cluster and "default" namespace by default
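	For reference, the pod_ready waits logged above poll kube-system pods by label. A rough manual equivalent is sketched below; it assumes the "no-preload-008839" kubectl context that minikube reports configuring, and the label selectors are taken from the wait list in the log:
	
	    # Illustrative only: mirrors the label-based readiness waits above.
	    kubectl --context no-preload-008839 -n kube-system get pods -l 'k8s-app in (kube-dns, kube-proxy)' -o wide
	    kubectl --context no-preload-008839 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m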
	W1205 07:05:18.308653  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	W1205 07:05:20.808836  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	I1205 07:05:18.809545  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:19.309502  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:19.808972  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:20.309569  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:20.809444  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:21.309261  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:21.808758  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:21.887928  355650 kubeadm.go:1114] duration metric: took 5.16553977s to wait for elevateKubeSystemPrivileges
	I1205 07:05:21.887963  355650 kubeadm.go:403] duration metric: took 18.370040269s to StartCluster
	I1205 07:05:21.887978  355650 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:21.888036  355650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:05:21.889657  355650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:21.889879  355650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 07:05:21.889898  355650 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:05:21.889944  355650 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:05:21.890067  355650 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-172186"
	I1205 07:05:21.890077  355650 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:05:21.890086  355650 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-172186"
	I1205 07:05:21.890157  355650 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:05:21.890110  355650 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-172186"
	I1205 07:05:21.890200  355650 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-172186"
	I1205 07:05:21.890581  355650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:05:21.890735  355650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:05:21.891115  355650 out.go:179] * Verifying Kubernetes components...
	I1205 07:05:21.892428  355650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:05:21.914951  355650 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:05:21.916220  355650 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:05:21.916252  355650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:05:21.916384  355650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:05:21.916938  355650 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-172186"
	I1205 07:05:21.917045  355650 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:05:21.917530  355650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:05:21.946096  355650 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:05:21.946122  355650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:05:21.946195  355650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:05:21.947726  355650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:05:21.968188  355650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:05:21.988791  355650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 07:05:22.046493  355650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:05:22.065706  355650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:05:22.081514  355650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:05:22.161085  355650 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1205 07:05:22.162807  355650 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-172186" to be "Ready" ...
	I1205 07:05:22.359441  355650 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1205 07:05:22.360384  355650 addons.go:530] duration metric: took 470.438863ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 07:05:22.665477  355650 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-172186" context rescaled to 1 replicas
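	The "rescaled to 1 replicas" line above is the coredns replica trim minikube applies after addons come up; a minimal manual equivalent (sketch only, assuming the kubectl context matches the "default-k8s-diff-port-172186" profile name) would be:
	
	    # Illustrative only: same effect as the rescale logged above.
	    kubectl --context default-k8s-diff-port-172186 -n kube-system scale deployment coredns --replicas=1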
	I1205 07:05:18.947912  361350 out.go:252] * Restarting existing docker container for "old-k8s-version-874709" ...
	I1205 07:05:18.947975  361350 cli_runner.go:164] Run: docker start old-k8s-version-874709
	I1205 07:05:19.216592  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:19.236196  361350 kic.go:430] container "old-k8s-version-874709" state is running.
	I1205 07:05:19.236585  361350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-874709
	I1205 07:05:19.254644  361350 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/config.json ...
	I1205 07:05:19.254833  361350 machine.go:94] provisionDockerMachine start ...
	I1205 07:05:19.254892  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:19.273302  361350 main.go:143] libmachine: Using SSH client type: native
	I1205 07:05:19.273572  361350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1205 07:05:19.273587  361350 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:05:19.274189  361350 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40708->127.0.0.1:33113: read: connection reset by peer
	I1205 07:05:22.421014  361350 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-874709
	
	I1205 07:05:22.421044  361350 ubuntu.go:182] provisioning hostname "old-k8s-version-874709"
	I1205 07:05:22.421104  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:22.439815  361350 main.go:143] libmachine: Using SSH client type: native
	I1205 07:05:22.440029  361350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1205 07:05:22.440045  361350 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-874709 && echo "old-k8s-version-874709" | sudo tee /etc/hostname
	I1205 07:05:22.588524  361350 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-874709
	
	I1205 07:05:22.588613  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:22.607348  361350 main.go:143] libmachine: Using SSH client type: native
	I1205 07:05:22.607657  361350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1205 07:05:22.607686  361350 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-874709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-874709/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-874709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:05:22.746318  361350 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:05:22.746359  361350 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:05:22.746419  361350 ubuntu.go:190] setting up certificates
	I1205 07:05:22.746433  361350 provision.go:84] configureAuth start
	I1205 07:05:22.746497  361350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-874709
	I1205 07:05:22.769297  361350 provision.go:143] copyHostCerts
	I1205 07:05:22.769433  361350 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:05:22.769446  361350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:05:22.769527  361350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:05:22.769673  361350 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:05:22.769682  361350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:05:22.769736  361350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:05:22.769834  361350 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:05:22.769841  361350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:05:22.769878  361350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:05:22.769964  361350 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-874709 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-874709]
	I1205 07:05:22.798244  361350 provision.go:177] copyRemoteCerts
	I1205 07:05:22.798335  361350 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:05:22.798385  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:22.818892  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:22.922010  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:05:22.941008  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 07:05:22.957532  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:05:22.974225  361350 provision.go:87] duration metric: took 227.777325ms to configureAuth
	I1205 07:05:22.974248  361350 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:05:22.974425  361350 config.go:182] Loaded profile config "old-k8s-version-874709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1205 07:05:22.974533  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:22.991889  361350 main.go:143] libmachine: Using SSH client type: native
	I1205 07:05:22.992114  361350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1205 07:05:22.992136  361350 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:05:23.303830  361350 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:05:23.303857  361350 machine.go:97] duration metric: took 4.049011499s to provisionDockerMachine
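	The SSH command a few lines above writes a small environment drop-in that passes --insecure-registry for the service CIDR to CRI-O. A quick way to confirm it landed (sketch only; the profile name is taken from this run, and the binary path follows the report's convention):
	
	    # Illustrative check of the drop-in written above.
	    out/minikube-linux-amd64 -p old-k8s-version-874709 ssh -- cat /etc/sysconfig/crio.minikube
	    out/minikube-linux-amd64 -p old-k8s-version-874709 ssh -- sudo systemctl is-active crio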
	I1205 07:05:23.303870  361350 start.go:293] postStartSetup for "old-k8s-version-874709" (driver="docker")
	I1205 07:05:23.303884  361350 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:05:23.303945  361350 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:05:23.303992  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:23.323474  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:23.420939  361350 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:05:23.424400  361350 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:05:23.424425  361350 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:05:23.424434  361350 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:05:23.424475  361350 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:05:23.424544  361350 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:05:23.424647  361350 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:05:23.432359  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:05:23.450500  361350 start.go:296] duration metric: took 146.61716ms for postStartSetup
	I1205 07:05:23.450566  361350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:05:23.450600  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:23.469665  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:23.564097  361350 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:05:23.568533  361350 fix.go:56] duration metric: took 4.638408012s for fixHost
	I1205 07:05:23.568561  361350 start.go:83] releasing machines lock for "old-k8s-version-874709", held for 4.6384531s
	I1205 07:05:23.568621  361350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-874709
	I1205 07:05:23.586088  361350 ssh_runner.go:195] Run: cat /version.json
	I1205 07:05:23.586152  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:23.586186  361350 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:05:23.586292  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:23.604172  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:23.605418  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:23.755814  361350 ssh_runner.go:195] Run: systemctl --version
	I1205 07:05:23.762465  361350 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:05:23.794667  361350 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:05:23.799124  361350 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:05:23.799182  361350 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:05:23.807255  361350 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:05:23.807272  361350 start.go:496] detecting cgroup driver to use...
	I1205 07:05:23.807297  361350 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:05:23.807348  361350 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:05:23.822039  361350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:05:23.834011  361350 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:05:23.834057  361350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:05:23.847299  361350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:05:23.859584  361350 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:05:23.938418  361350 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:05:24.013904  361350 docker.go:234] disabling docker service ...
	I1205 07:05:24.013969  361350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:05:24.028379  361350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:05:24.039619  361350 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:05:24.114233  361350 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:05:24.199092  361350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:05:24.211045  361350 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:05:24.224645  361350 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 07:05:24.224694  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.233803  361350 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:05:24.233849  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.241986  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.250089  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.258031  361350 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:05:24.265526  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.273467  361350 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.281040  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
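	Taken together, the sed edits above pin the pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl in the CRI-O drop-in. A spot check before the restart that follows (sketch only; profile name and expected values are taken from the commands in this log):
	
	    # Illustrative only: read back the keys the sed commands above modified.
	    out/minikube-linux-amd64 -p old-k8s-version-874709 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # Expected: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "systemd",
	    # conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls.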
	I1205 07:05:24.288986  361350 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:05:24.295577  361350 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:05:24.302149  361350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:05:24.381683  361350 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:05:24.513406  361350 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:05:24.513468  361350 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:05:24.517306  361350 start.go:564] Will wait 60s for crictl version
	I1205 07:05:24.517376  361350 ssh_runner.go:195] Run: which crictl
	I1205 07:05:24.521188  361350 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:05:24.546369  361350 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:05:24.546458  361350 ssh_runner.go:195] Run: crio --version
	I1205 07:05:24.573084  361350 ssh_runner.go:195] Run: crio --version
	I1205 07:05:24.600432  361350 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1205 07:05:24.601389  361350 cli_runner.go:164] Run: docker network inspect old-k8s-version-874709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:05:24.618878  361350 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1205 07:05:24.622703  361350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:05:24.633060  361350 kubeadm.go:884] updating cluster {Name:old-k8s-version-874709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:05:24.633187  361350 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1205 07:05:24.633224  361350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:05:24.666471  361350 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:05:24.666489  361350 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:05:24.666530  361350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:05:24.691979  361350 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:05:24.691997  361350 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:05:24.692003  361350 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1205 07:05:24.692100  361350 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-874709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:05:24.692154  361350 ssh_runner.go:195] Run: crio config
	I1205 07:05:24.736227  361350 cni.go:84] Creating CNI manager for ""
	I1205 07:05:24.736246  361350 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:05:24.736257  361350 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:05:24.736296  361350 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-874709 NodeName:old-k8s-version-874709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:05:24.736499  361350 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-874709"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
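Editor's note: the KubeletConfiguration section of the kubeadm config above is what ties the kubelet to CRI-O (containerRuntimeEndpoint) and to the systemd cgroup driver configured earlier through 02-crio.conf. A minimal Go sketch that parses just those fields for a quick sanity check is below; it assumes gopkg.in/yaml.v3 is available, and the kubeletConfig struct is a hypothetical subset of the real KubeletConfiguration type, not minikube code.

// Minimal sketch: read the runtime endpoint and cgroup driver out of a
// KubeletConfiguration document like the one generated above.
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

const doc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
failSwapOn: false
`

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cgroupDriver=%s endpoint=%s failSwapOn=%v\n",
		cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.FailSwapOn)
}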
	
	I1205 07:05:24.736582  361350 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1205 07:05:24.744933  361350 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:05:24.744990  361350 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:05:24.752803  361350 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1205 07:05:24.765404  361350 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:05:24.777441  361350 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1205 07:05:24.788985  361350 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:05:24.792622  361350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:05:24.801914  361350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:05:24.881903  361350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:05:24.904484  361350 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709 for IP: 192.168.103.2
	I1205 07:05:24.904501  361350 certs.go:195] generating shared ca certs ...
	I1205 07:05:24.904516  361350 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:24.904644  361350 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:05:24.904702  361350 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:05:24.904714  361350 certs.go:257] generating profile certs ...
	I1205 07:05:24.904820  361350 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/client.key
	I1205 07:05:24.904873  361350 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/apiserver.key.8f229178
	I1205 07:05:24.904914  361350 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/proxy-client.key
	I1205 07:05:24.905017  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:05:24.905052  361350 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:05:24.905062  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:05:24.905090  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:05:24.905113  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:05:24.905138  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:05:24.905177  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:05:24.905830  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:05:24.923443  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:05:24.941714  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:05:24.959879  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:05:24.980027  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 07:05:25.000656  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:05:25.017112  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:05:25.033266  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:05:25.050195  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:05:25.067094  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:05:25.083486  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:05:25.100826  361350 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:05:25.113370  361350 ssh_runner.go:195] Run: openssl version
	I1205 07:05:25.119213  361350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:05:25.125963  361350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:05:25.132814  361350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:05:25.136109  361350 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:05:25.136143  361350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:05:25.172312  361350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:05:25.179748  361350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:05:25.186716  361350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:05:25.193521  361350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:05:25.196840  361350 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:05:25.196881  361350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:05:25.231668  361350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:05:25.238492  361350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:05:25.245470  361350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:05:25.253450  361350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:05:25.256888  361350 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:05:25.256923  361350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:05:25.293464  361350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:05:25.300763  361350 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:05:25.304313  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:05:25.338602  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:05:25.373006  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:05:25.416283  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:05:25.460378  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:05:25.499869  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
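Editor's note: the six openssl runs above each use "-checkend 86400", i.e. they ask whether a control-plane certificate expires within the next 24 hours; a non-zero exit would make minikube regenerate that certificate. A sketch of the same check in Go, using only the standard library, is below; the expiresWithin helper and the certificate path are illustrative, not minikube's certs.go implementation.

// Minimal sketch: report whether a PEM certificate expires within a window,
// equivalent to "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}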
	I1205 07:05:25.556132  361350 kubeadm.go:401] StartCluster: {Name:old-k8s-version-874709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:05:25.556237  361350 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:05:25.556309  361350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:05:25.591064  361350 cri.go:89] found id: "a5a9622dfd7dc6fdcabf3ea8aec3eaeabfdda77bc311ed906f332cc7d039353d"
	I1205 07:05:25.591095  361350 cri.go:89] found id: "6be13235867d468a9e246f51290d3c4f7ea7f6f8510393f2a1b3dab9fbb99a9b"
	I1205 07:05:25.591120  361350 cri.go:89] found id: "7c7e915cc7becaf51abc1256271d87f755bc16e224a0daf6a90d291932385f08"
	I1205 07:05:25.591125  361350 cri.go:89] found id: "ffe21b4df5d3a969685218725304cbe5f9fc2b6432a5f7451e96a4edabf288fc"
	I1205 07:05:25.591130  361350 cri.go:89] found id: ""
	I1205 07:05:25.591174  361350 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:05:25.603289  361350 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:05:25Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:05:25.603376  361350 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:05:25.611519  361350 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:05:25.611537  361350 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:05:25.611591  361350 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:05:25.619110  361350 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:05:25.620504  361350 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-874709" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:05:25.621490  361350 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-874709" cluster setting kubeconfig missing "old-k8s-version-874709" context setting]
	I1205 07:05:25.622478  361350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:25.624299  361350 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:05:25.631529  361350 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1205 07:05:25.631560  361350 kubeadm.go:602] duration metric: took 20.012895ms to restartPrimaryControlPlane
	I1205 07:05:25.631567  361350 kubeadm.go:403] duration metric: took 75.4473ms to StartCluster
	I1205 07:05:25.631579  361350 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:25.631630  361350 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:05:25.633577  361350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:25.633800  361350 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:05:25.633873  361350 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:05:25.633957  361350 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-874709"
	I1205 07:05:25.633986  361350 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-874709"
	W1205 07:05:25.633998  361350 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:05:25.634027  361350 host.go:66] Checking if "old-k8s-version-874709" exists ...
	I1205 07:05:25.634090  361350 config.go:182] Loaded profile config "old-k8s-version-874709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1205 07:05:25.634226  361350 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-874709"
	I1205 07:05:25.634255  361350 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-874709"
	I1205 07:05:25.634617  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:25.634647  361350 addons.go:70] Setting dashboard=true in profile "old-k8s-version-874709"
	I1205 07:05:25.634668  361350 addons.go:239] Setting addon dashboard=true in "old-k8s-version-874709"
	W1205 07:05:25.634677  361350 addons.go:248] addon dashboard should already be in state true
	I1205 07:05:25.634790  361350 host.go:66] Checking if "old-k8s-version-874709" exists ...
	I1205 07:05:25.634622  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:25.635384  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:25.635926  361350 out.go:179] * Verifying Kubernetes components...
	I1205 07:05:25.637108  361350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:05:25.662647  361350 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:05:25.662647  361350 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:05:25.662907  361350 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-874709"
	W1205 07:05:25.662924  361350 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:05:25.662945  361350 host.go:66] Checking if "old-k8s-version-874709" exists ...
	I1205 07:05:25.663490  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:25.663880  361350 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:05:25.663897  361350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:05:25.663943  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:25.664835  361350 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1205 07:05:22.808920  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	W1205 07:05:25.308649  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	I1205 07:05:25.668192  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:05:25.668210  361350 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:05:25.668276  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:25.698069  361350 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:05:25.698138  361350 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:05:25.698199  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:25.703843  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:25.706842  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:25.721133  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:25.806050  361350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:05:25.823463  361350 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-874709" to be "Ready" ...
	I1205 07:05:25.824054  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:05:25.824073  361350 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:05:25.826097  361350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:05:25.838375  361350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:05:25.838587  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:05:25.838605  361350 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:05:25.855172  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:05:25.855191  361350 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:05:25.871286  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:05:25.871304  361350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:05:25.889076  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:05:25.889095  361350 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:05:25.905311  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:05:25.905354  361350 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:05:25.921271  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:05:25.921293  361350 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:05:25.935244  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:05:25.935266  361350 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:05:25.947955  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:05:25.947970  361350 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:05:25.960865  361350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:05:27.605896  361350 node_ready.go:49] node "old-k8s-version-874709" is "Ready"
	I1205 07:05:27.605933  361350 node_ready.go:38] duration metric: took 1.782430557s for node "old-k8s-version-874709" to be "Ready" ...
	I1205 07:05:27.605949  361350 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:05:27.605999  361350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:05:28.232682  361350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.406554683s)
	I1205 07:05:28.232743  361350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.394340724s)
	I1205 07:05:28.557259  361350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.596325785s)
	I1205 07:05:28.557292  361350 api_server.go:72] duration metric: took 2.923453611s to wait for apiserver process to appear ...
	I1205 07:05:28.557313  361350 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:05:28.557355  361350 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:05:28.558583  361350 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-874709 addons enable metrics-server
	
	I1205 07:05:28.559814  361350 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1205 07:05:24.166260  355650 node_ready.go:57] node "default-k8s-diff-port-172186" has "Ready":"False" status (will retry)
	W1205 07:05:26.665821  355650 node_ready.go:57] node "default-k8s-diff-port-172186" has "Ready":"False" status (will retry)
	I1205 07:05:28.561201  361350 addons.go:530] duration metric: took 2.927329498s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1205 07:05:28.563129  361350 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 07:05:28.563152  361350 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
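Editor's note: the 500 above is the apiserver's aggregated /healthz response; the only failing check is [-]poststarthook/rbac/bootstrap-roles, which usually clears a few seconds after the apiserver finishes creating its bootstrap RBAC roles, so minikube keeps polling. A minimal Go sketch of such a polling loop is below; it is illustrative, not minikube's api_server.go, and it skips TLS verification only to stay short (a real check should trust the cluster CA at /var/lib/minikube/certs/ca.crt).

// Minimal sketch: poll the apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipped only for brevity in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: %d\n%s\n", i+1, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}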
	
	
	==> CRI-O <==
	Dec 05 07:05:19 no-preload-008839 crio[770]: time="2025-12-05T07:05:19.016923222Z" level=info msg="Starting container: 01f0fb8251c93bbb60faf93b6d28ed3d1df41f456481e5a2d9693d86e82798fb" id=45758f05-89f7-49d6-af5d-8de11dc6c98d name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:05:19 no-preload-008839 crio[770]: time="2025-12-05T07:05:19.018780477Z" level=info msg="Started container" PID=2819 containerID=01f0fb8251c93bbb60faf93b6d28ed3d1df41f456481e5a2d9693d86e82798fb description=kube-system/coredns-7d764666f9-bvbhf/coredns id=45758f05-89f7-49d6-af5d-8de11dc6c98d name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb0abbd04e7d0ce81d6505afc0063f7a3e608808bc4cf53243615db095f58a1d
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.174415775Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d22e87a9-7a99-4e41-bdf1-7887292ffe98 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.174512916Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.182126559Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:848ca92a2f8ee82adeef5e87abfdd082b90db2ede7660cfa247e79093fef899f UID:77583b71-31d4-4d4c-8696-58ffa671159e NetNS:/var/run/netns/fb68fd44-731e-4469-b465-dc86c3bc4a02 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00033af88}] Aliases:map[]}"
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.182167172Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.196035109Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:848ca92a2f8ee82adeef5e87abfdd082b90db2ede7660cfa247e79093fef899f UID:77583b71-31d4-4d4c-8696-58ffa671159e NetNS:/var/run/netns/fb68fd44-731e-4469-b465-dc86c3bc4a02 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00033af88}] Aliases:map[]}"
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.19621562Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.197410942Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.198610174Z" level=info msg="Ran pod sandbox 848ca92a2f8ee82adeef5e87abfdd082b90db2ede7660cfa247e79093fef899f with infra container: default/busybox/POD" id=d22e87a9-7a99-4e41-bdf1-7887292ffe98 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.200705196Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0ac02e16-ecc9-4c66-b683-d9bc2dfa84c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.200842761Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0ac02e16-ecc9-4c66-b683-d9bc2dfa84c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.200888448Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0ac02e16-ecc9-4c66-b683-d9bc2dfa84c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.20167763Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56c722c1-709a-459b-82b5-8db543323fbb name=/runtime.v1.ImageService/PullImage
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.204789734Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.892535776Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=56c722c1-709a-459b-82b5-8db543323fbb name=/runtime.v1.ImageService/PullImage
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.893149275Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3e4d149e-aa66-42cd-88e6-0386f225f3d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.894574067Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=acb3c7e7-fdd8-4092-ab06-c2c985c7d114 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.897398885Z" level=info msg="Creating container: default/busybox/busybox" id=dde8e9b2-6ab9-4c7b-b42a-548f41873056 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.897513043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.900753628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.901251712Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.926947608Z" level=info msg="Created container 418be5e9338d220b172a94503da839fc3cc3b4bcce6ae6b3a160d4fd4c88eb4f: default/busybox/busybox" id=dde8e9b2-6ab9-4c7b-b42a-548f41873056 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.927502353Z" level=info msg="Starting container: 418be5e9338d220b172a94503da839fc3cc3b4bcce6ae6b3a160d4fd4c88eb4f" id=703072c6-9064-42b7-a6dd-e33dba82b763 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:05:22 no-preload-008839 crio[770]: time="2025-12-05T07:05:22.929584755Z" level=info msg="Started container" PID=2894 containerID=418be5e9338d220b172a94503da839fc3cc3b4bcce6ae6b3a160d4fd4c88eb4f description=default/busybox/busybox id=703072c6-9064-42b7-a6dd-e33dba82b763 name=/runtime.v1.RuntimeService/StartContainer sandboxID=848ca92a2f8ee82adeef5e87abfdd082b90db2ede7660cfa247e79093fef899f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	418be5e9338d2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   848ca92a2f8ee       busybox                                     default
	01f0fb8251c93       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   bb0abbd04e7d0       coredns-7d764666f9-bvbhf                    kube-system
	53ff897fba0d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   d4e1e4538c3f9       storage-provisioner                         kube-system
	7119b7c08d392       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   49b9ab1b73444       kindnet-k65q9                               kube-system
	c8b5ffd132329       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      25 seconds ago      Running             kube-proxy                0                   f7665bea281d0       kube-proxy-s9zn2                            kube-system
	cc0cae03ab7a8       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      37 seconds ago      Running             kube-apiserver            0                   1128fedc33fa4       kube-apiserver-no-preload-008839            kube-system
	547205b8a0558       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      37 seconds ago      Running             etcd                      0                   7deefd6a1b749       etcd-no-preload-008839                      kube-system
	2e931f274a615       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      37 seconds ago      Running             kube-scheduler            0                   8582765ecb6dc       kube-scheduler-no-preload-008839            kube-system
	b8234a7722c4c       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      37 seconds ago      Running             kube-controller-manager   0                   1cd32b95f9882       kube-controller-manager-no-preload-008839   kube-system
	
	
	==> coredns [01f0fb8251c93bbb60faf93b6d28ed3d1df41f456481e5a2d9693d86e82798fb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:33560 - 43903 "HINFO IN 654403502578241393.142957523705339921. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.056582023s
	
	
	==> describe nodes <==
	Name:               no-preload-008839
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-008839
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=no-preload-008839
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_05_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:04:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-008839
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:05:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:05:30 +0000   Fri, 05 Dec 2025 07:04:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:05:30 +0000   Fri, 05 Dec 2025 07:04:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:05:30 +0000   Fri, 05 Dec 2025 07:04:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:05:30 +0000   Fri, 05 Dec 2025 07:05:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-008839
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                fb2974e4-0c42-4f11-b1e5-d1c92fcbd635
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-bvbhf                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-008839                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-k65q9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-008839             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-008839    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-s9zn2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-008839             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node no-preload-008839 event: Registered Node no-preload-008839 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [547205b8a055813f19a62a340e44addb37ac16d03473acb15d444bdd18f06323] <==
	{"level":"warn","ts":"2025-12-05T07:04:54.908940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:54.915889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:54.923266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:54.931793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:54.939200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:54.948092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:54.957001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:54.965206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:54.987900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:54.995366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:55.003951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:55.012088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:04:55.064090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46636","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T07:04:56.865553Z","caller":"traceutil/trace.go:172","msg":"trace[1197032007] transaction","detail":"{read_only:false; response_revision:116; number_of_response:1; }","duration":"135.717264ms","start":"2025-12-05T07:04:56.729818Z","end":"2025-12-05T07:04:56.865536Z","steps":["trace[1197032007] 'process raft request'  (duration: 64.723391ms)","trace[1197032007] 'compare'  (duration: 70.891398ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-05T07:04:57.137453Z","caller":"traceutil/trace.go:172","msg":"trace[1914243815] transaction","detail":"{read_only:false; response_revision:119; number_of_response:1; }","duration":"130.752293ms","start":"2025-12-05T07:04:57.006680Z","end":"2025-12-05T07:04:57.137432Z","steps":["trace[1914243815] 'process raft request'  (duration: 130.617821ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T07:04:57.273160Z","caller":"traceutil/trace.go:172","msg":"trace[1038096805] transaction","detail":"{read_only:false; response_revision:120; number_of_response:1; }","duration":"131.538093ms","start":"2025-12-05T07:04:57.141600Z","end":"2025-12-05T07:04:57.273138Z","steps":["trace[1038096805] 'process raft request'  (duration: 126.255439ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T07:04:57.400058Z","caller":"traceutil/trace.go:172","msg":"trace[894136442] transaction","detail":"{read_only:false; response_revision:121; number_of_response:1; }","duration":"122.826749ms","start":"2025-12-05T07:04:57.277209Z","end":"2025-12-05T07:04:57.400036Z","steps":["trace[894136442] 'process raft request'  (duration: 120.876895ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T07:04:57.579751Z","caller":"traceutil/trace.go:172","msg":"trace[1364686697] transaction","detail":"{read_only:false; response_revision:122; number_of_response:1; }","duration":"175.459857ms","start":"2025-12-05T07:04:57.404273Z","end":"2025-12-05T07:04:57.579732Z","steps":["trace[1364686697] 'process raft request'  (duration: 145.036374ms)","trace[1364686697] 'compare'  (duration: 30.33244ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-05T07:04:57.885043Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.424256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-05T07:04:57.885119Z","caller":"traceutil/trace.go:172","msg":"trace[1567097283] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:124; }","duration":"185.513368ms","start":"2025-12-05T07:04:57.699591Z","end":"2025-12-05T07:04:57.885104Z","steps":["trace[1567097283] 'agreement among raft nodes before linearized reading'  (duration: 25.060413ms)","trace[1567097283] 'range keys from in-memory index tree'  (duration: 160.331892ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-05T07:04:57.885462Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.421269ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597514669364851 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:controller:service-account-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:controller:service-account-controller\" value_size:587 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-05T07:04:57.885534Z","caller":"traceutil/trace.go:172","msg":"trace[1428359204] transaction","detail":"{read_only:false; response_revision:125; number_of_response:1; }","duration":"278.669449ms","start":"2025-12-05T07:04:57.606855Z","end":"2025-12-05T07:04:57.885524Z","steps":["trace[1428359204] 'process raft request'  (duration: 117.823133ms)","trace[1428359204] 'compare'  (duration: 160.306993ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-05T07:04:58.013456Z","caller":"traceutil/trace.go:172","msg":"trace[218282334] transaction","detail":"{read_only:false; response_revision:126; number_of_response:1; }","duration":"124.332437ms","start":"2025-12-05T07:04:57.889103Z","end":"2025-12-05T07:04:58.013436Z","steps":["trace[218282334] 'process raft request'  (duration: 123.435784ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T07:04:58.158498Z","caller":"traceutil/trace.go:172","msg":"trace[4676456] transaction","detail":"{read_only:false; response_revision:127; number_of_response:1; }","duration":"141.238037ms","start":"2025-12-05T07:04:58.017215Z","end":"2025-12-05T07:04:58.158454Z","steps":["trace[4676456] 'process raft request'  (duration: 141.098799ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T07:04:58.598795Z","caller":"traceutil/trace.go:172","msg":"trace[903956661] transaction","detail":"{read_only:false; response_revision:137; number_of_response:1; }","duration":"124.944287ms","start":"2025-12-05T07:04:58.473835Z","end":"2025-12-05T07:04:58.598779Z","steps":["trace[903956661] 'process raft request'  (duration: 122.241986ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:05:31 up  1:47,  0 user,  load average: 4.76, 3.38, 2.18
	Linux no-preload-008839 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7119b7c08d392a24134c994fa255225cb1b156ee4ab6ec9aab657939f2666270] <==
	I1205 07:05:08.002683       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:05:08.002993       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1205 07:05:08.003177       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:05:08.003199       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:05:08.003228       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:05:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:05:08.207532       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:05:08.207682       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:05:08.207711       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:05:08.300205       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:05:08.607886       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:05:08.607914       1 metrics.go:72] Registering metrics
	I1205 07:05:08.608013       1 controller.go:711] "Syncing nftables rules"
	I1205 07:05:18.208240       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:05:18.208308       1 main.go:301] handling current node
	I1205 07:05:28.207296       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:05:28.207343       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cc0cae03ab7a8243837c04437cfbb5b803e74332fa9abb4f2c6d21a292753460] <==
	I1205 07:04:55.599373       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1205 07:04:55.599429       1 aggregator.go:187] initial CRD sync complete...
	I1205 07:04:55.599443       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:04:55.599451       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:04:55.599459       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:04:55.610835       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:04:55.797553       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:04:56.500806       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1205 07:04:56.505678       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1205 07:04:56.505695       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1205 07:04:58.805547       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:04:58.847588       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:04:58.912172       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1205 07:04:58.920777       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1205 07:04:58.922027       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:04:58.927744       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:04:59.526019       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:04:59.707918       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:04:59.723241       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 07:04:59.733657       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1205 07:05:04.983621       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:05:04.988615       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:05:05.076573       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:05:05.523409       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1205 07:05:29.953116       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:57682: use of closed network connection
	
	
	==> kube-controller-manager [b8234a7722c4c38e72f9257f946ec5e63909f6dc668022818ce67ee534da2ade] <==
	I1205 07:05:04.333539       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.333704       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1205 07:05:04.333855       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-008839"
	I1205 07:05:04.333937       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1205 07:05:04.334236       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.333462       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.334702       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:05:04.336383       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.338024       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.338766       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.339438       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.339831       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.340014       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.333427       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.344458       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.345269       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.346757       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.347448       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.348700       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-008839" podCIDRs=["10.244.0.0/24"]
	I1205 07:05:04.356034       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.435351       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.439543       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:04.439563       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1205 07:05:04.439568       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1205 07:05:19.336998       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [c8b5ffd132329c0ae559b1a88b4983f1d7c00b383eba89db7170497f7354b9b6] <==
	I1205 07:05:06.004990       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:05:06.066748       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:05:06.168027       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:06.168077       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1205 07:05:06.168185       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:05:06.192123       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:05:06.192183       1 server_linux.go:136] "Using iptables Proxier"
	I1205 07:05:06.198477       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:05:06.198996       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1205 07:05:06.199084       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:06.200770       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:05:06.201109       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:05:06.200869       1 config.go:309] "Starting node config controller"
	I1205 07:05:06.201210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:05:06.201236       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:05:06.200870       1 config.go:200] "Starting service config controller"
	I1205 07:05:06.201281       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:05:06.200888       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:05:06.201712       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:05:06.301964       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:05:06.301980       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:05:06.302002       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2e931f274a6156c75c9923e3cfd82ab74a633bd22bc8d4392e924fcb78a1239c] <==
	E1205 07:04:56.828022       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1205 07:04:56.828771       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1205 07:04:56.971392       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1205 07:04:56.972371       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1205 07:04:57.021491       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1205 07:04:57.022386       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1205 07:04:57.038354       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1205 07:04:57.039067       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1205 07:04:57.051952       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1205 07:04:57.052825       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1205 07:04:57.114362       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1205 07:04:57.115136       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1205 07:04:57.131098       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1205 07:04:57.131936       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1205 07:04:57.133769       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1205 07:04:57.134516       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1205 07:04:57.142599       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1205 07:04:57.143403       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1205 07:04:58.187228       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1205 07:04:58.188425       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1205 07:04:58.294925       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1205 07:04:58.295928       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1205 07:04:58.515503       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1205 07:04:58.516479       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	I1205 07:05:00.354490       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 05 07:05:05 no-preload-008839 kubelet[2214]: I1205 07:05:05.613273    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73b9d6c5-f629-4c51-943c-fd18a048eae2-lib-modules\") pod \"kube-proxy-s9zn2\" (UID: \"73b9d6c5-f629-4c51-943c-fd18a048eae2\") " pod="kube-system/kube-proxy-s9zn2"
	Dec 05 07:05:05 no-preload-008839 kubelet[2214]: I1205 07:05:05.613297    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/60bf9fdc-755d-4308-bf58-4a3d3459eddb-cni-cfg\") pod \"kindnet-k65q9\" (UID: \"60bf9fdc-755d-4308-bf58-4a3d3459eddb\") " pod="kube-system/kindnet-k65q9"
	Dec 05 07:05:05 no-preload-008839 kubelet[2214]: I1205 07:05:05.613406    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nzvl\" (UniqueName: \"kubernetes.io/projected/73b9d6c5-f629-4c51-943c-fd18a048eae2-kube-api-access-9nzvl\") pod \"kube-proxy-s9zn2\" (UID: \"73b9d6c5-f629-4c51-943c-fd18a048eae2\") " pod="kube-system/kube-proxy-s9zn2"
	Dec 05 07:05:05 no-preload-008839 kubelet[2214]: I1205 07:05:05.613446    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60bf9fdc-755d-4308-bf58-4a3d3459eddb-xtables-lock\") pod \"kindnet-k65q9\" (UID: \"60bf9fdc-755d-4308-bf58-4a3d3459eddb\") " pod="kube-system/kindnet-k65q9"
	Dec 05 07:05:05 no-preload-008839 kubelet[2214]: I1205 07:05:05.613485    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73b9d6c5-f629-4c51-943c-fd18a048eae2-xtables-lock\") pod \"kube-proxy-s9zn2\" (UID: \"73b9d6c5-f629-4c51-943c-fd18a048eae2\") " pod="kube-system/kube-proxy-s9zn2"
	Dec 05 07:05:05 no-preload-008839 kubelet[2214]: I1205 07:05:05.613530    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prgl2\" (UniqueName: \"kubernetes.io/projected/60bf9fdc-755d-4308-bf58-4a3d3459eddb-kube-api-access-prgl2\") pod \"kindnet-k65q9\" (UID: \"60bf9fdc-755d-4308-bf58-4a3d3459eddb\") " pod="kube-system/kindnet-k65q9"
	Dec 05 07:05:05 no-preload-008839 kubelet[2214]: E1205 07:05:05.634454    2214 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-008839" containerName="kube-scheduler"
	Dec 05 07:05:06 no-preload-008839 kubelet[2214]: I1205 07:05:06.653471    2214 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-s9zn2" podStartSLOduration=1.653456646 podStartE2EDuration="1.653456646s" podCreationTimestamp="2025-12-05 07:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:06.653361621 +0000 UTC m=+7.162466620" watchObservedRunningTime="2025-12-05 07:05:06.653456646 +0000 UTC m=+7.162561626"
	Dec 05 07:05:08 no-preload-008839 kubelet[2214]: I1205 07:05:08.656208    2214 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-k65q9" podStartSLOduration=1.7429273109999999 podStartE2EDuration="3.65618893s" podCreationTimestamp="2025-12-05 07:05:05 +0000 UTC" firstStartedPulling="2025-12-05 07:05:05.857958635 +0000 UTC m=+6.367063613" lastFinishedPulling="2025-12-05 07:05:07.77122026 +0000 UTC m=+8.280325232" observedRunningTime="2025-12-05 07:05:08.655914665 +0000 UTC m=+9.165019637" watchObservedRunningTime="2025-12-05 07:05:08.65618893 +0000 UTC m=+9.165293918"
	Dec 05 07:05:10 no-preload-008839 kubelet[2214]: E1205 07:05:10.926062    2214 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-008839" containerName="kube-apiserver"
	Dec 05 07:05:12 no-preload-008839 kubelet[2214]: E1205 07:05:12.601846    2214 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-008839" containerName="etcd"
	Dec 05 07:05:12 no-preload-008839 kubelet[2214]: E1205 07:05:12.911074    2214 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-008839" containerName="kube-controller-manager"
	Dec 05 07:05:15 no-preload-008839 kubelet[2214]: E1205 07:05:15.577767    2214 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-008839" containerName="kube-scheduler"
	Dec 05 07:05:18 no-preload-008839 kubelet[2214]: I1205 07:05:18.633734    2214 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 05 07:05:18 no-preload-008839 kubelet[2214]: I1205 07:05:18.704537    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/898995af-4e62-44f5-91b9-f7a35befdcb4-config-volume\") pod \"coredns-7d764666f9-bvbhf\" (UID: \"898995af-4e62-44f5-91b9-f7a35befdcb4\") " pod="kube-system/coredns-7d764666f9-bvbhf"
	Dec 05 07:05:18 no-preload-008839 kubelet[2214]: I1205 07:05:18.704599    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fq8f\" (UniqueName: \"kubernetes.io/projected/898995af-4e62-44f5-91b9-f7a35befdcb4-kube-api-access-9fq8f\") pod \"coredns-7d764666f9-bvbhf\" (UID: \"898995af-4e62-44f5-91b9-f7a35befdcb4\") " pod="kube-system/coredns-7d764666f9-bvbhf"
	Dec 05 07:05:18 no-preload-008839 kubelet[2214]: I1205 07:05:18.704657    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/45db8452-3833-4917-a660-183d0a4bcac4-tmp\") pod \"storage-provisioner\" (UID: \"45db8452-3833-4917-a660-183d0a4bcac4\") " pod="kube-system/storage-provisioner"
	Dec 05 07:05:18 no-preload-008839 kubelet[2214]: I1205 07:05:18.704721    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qkqp\" (UniqueName: \"kubernetes.io/projected/45db8452-3833-4917-a660-183d0a4bcac4-kube-api-access-8qkqp\") pod \"storage-provisioner\" (UID: \"45db8452-3833-4917-a660-183d0a4bcac4\") " pod="kube-system/storage-provisioner"
	Dec 05 07:05:19 no-preload-008839 kubelet[2214]: E1205 07:05:19.667021    2214 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-bvbhf" containerName="coredns"
	Dec 05 07:05:19 no-preload-008839 kubelet[2214]: I1205 07:05:19.675267    2214 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.675250215 podStartE2EDuration="13.675250215s" podCreationTimestamp="2025-12-05 07:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:19.675092223 +0000 UTC m=+20.184197203" watchObservedRunningTime="2025-12-05 07:05:19.675250215 +0000 UTC m=+20.184355194"
	Dec 05 07:05:20 no-preload-008839 kubelet[2214]: E1205 07:05:20.669221    2214 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-bvbhf" containerName="coredns"
	Dec 05 07:05:21 no-preload-008839 kubelet[2214]: E1205 07:05:21.671117    2214 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-bvbhf" containerName="coredns"
	Dec 05 07:05:21 no-preload-008839 kubelet[2214]: I1205 07:05:21.864693    2214 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-bvbhf" podStartSLOduration=16.864663748 podStartE2EDuration="16.864663748s" podCreationTimestamp="2025-12-05 07:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:19.683919658 +0000 UTC m=+20.193024636" watchObservedRunningTime="2025-12-05 07:05:21.864663748 +0000 UTC m=+22.373768729"
	Dec 05 07:05:21 no-preload-008839 kubelet[2214]: I1205 07:05:21.921985    2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cztxp\" (UniqueName: \"kubernetes.io/projected/77583b71-31d4-4d4c-8696-58ffa671159e-kube-api-access-cztxp\") pod \"busybox\" (UID: \"77583b71-31d4-4d4c-8696-58ffa671159e\") " pod="default/busybox"
	Dec 05 07:05:23 no-preload-008839 kubelet[2214]: I1205 07:05:23.686767    2214 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.994150303 podStartE2EDuration="2.686750897s" podCreationTimestamp="2025-12-05 07:05:21 +0000 UTC" firstStartedPulling="2025-12-05 07:05:22.20129352 +0000 UTC m=+22.710398491" lastFinishedPulling="2025-12-05 07:05:22.893894125 +0000 UTC m=+23.402999085" observedRunningTime="2025-12-05 07:05:23.686573698 +0000 UTC m=+24.195678679" watchObservedRunningTime="2025-12-05 07:05:23.686750897 +0000 UTC m=+24.195855877"
	
	
	==> storage-provisioner [53ff897fba0d91e3ca7c3b82c241013502f0654a2ced4e6ec9c27df6872c53e9] <==
	I1205 07:05:19.031508       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:05:19.041549       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:05:19.041603       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1205 07:05:19.044220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:19.051999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:05:19.052120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:05:19.052269       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-008839_bbecc969-a5d9-452c-a06d-92dc338c0068!
	I1205 07:05:19.052447       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60a68084-c5d5-49bc-8273-b0880be31ea1", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-008839_bbecc969-a5d9-452c-a06d-92dc338c0068 became leader
	W1205 07:05:19.068401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:19.072062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:05:19.153439       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-008839_bbecc969-a5d9-452c-a06d-92dc338c0068!
	W1205 07:05:21.075177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:21.080195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:23.083844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:23.087356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:25.090174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:25.094413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:27.098405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:27.102602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:29.106730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:29.110299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:31.112978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:31.116979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008839 -n no-preload-008839
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-008839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (291.88553ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:05:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
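The exit status 11 above comes from minikube's "check paused" pre-flight rather than from metrics-server itself: the stderr shows `sudo runc list -f json` failing because /run/runc does not exist on the node. As a rough manual reproduction (a sketch only, reusing the profile name from this run; these commands are not part of the test harness), one could re-run the same probe over SSH:

	out/minikube-linux-amd64 -p default-k8s-diff-port-172186 ssh -- sudo ls /run/runc
	out/minikube-linux-amd64 -p default-k8s-diff-port-172186 ssh -- sudo runc list -f json

If the state directory is missing, the second command should reproduce the "open /run/runc: no such file or directory" error captured in the stderr block above.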
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-172186 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-172186 describe deploy/metrics-server -n kube-system: exit status 1 (74.004712ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-172186 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
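Because the deployment was never created, the describe call returns NotFound and there is no image to check. For a run where the addon does come up, a minimal way to read back the deployed image (a hedged sketch, not part of the test harness) would be a jsonpath query such as:

	kubectl --context default-k8s-diff-port-172186 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

The test expects that value to contain fake.domain/registry.k8s.io/echoserver:1.4, matching the --images and --registries overrides passed to the addons enable command above.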
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-172186
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-172186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c",
	        "Created": "2025-12-05T07:04:58.706172169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 356879,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:04:58.744766507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/hostname",
	        "HostsPath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/hosts",
	        "LogPath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c-json.log",
	        "Name": "/default-k8s-diff-port-172186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-172186:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-172186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c",
	                "LowerDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-172186",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-172186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-172186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-172186",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-172186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fb722759592f213c4f3f7541fed9e220751bcd33b5e26bf19d1bb6f38a62b4b7",
	            "SandboxKey": "/var/run/docker/netns/fb722759592f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-172186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7252f408ef750a913b6fabe10d1ab3c2a2b877d7652581ebca03873c25ab3784",
	                    "EndpointID": "f2c6fb5aa9fda2607e649fece7a40217856a56045117b9aa0ad30e22c7c58b94",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "42:54:9f:b8:08:56",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-172186",
	                        "b4ba7170def8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-172186 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-172186 logs -n 25: (1.18662631s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-397607 sudo docker system info                                                                                                                                                                                                      │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo containerd config dump                                                                                                                                                                                                  │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo crio config                                                                                                                                                                                                             │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p bridge-397607                                                                                                                                                                                                                              │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p disable-driver-mounts-245906                                                                                                                                                                                                               │ disable-driver-mounts-245906 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p old-k8s-version-874709 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-874709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p no-preload-008839 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:05:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:05:18.732725  361350 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:05:18.733368  361350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:05:18.733384  361350 out.go:374] Setting ErrFile to fd 2...
	I1205 07:05:18.733392  361350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:05:18.734057  361350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:05:18.734618  361350 out.go:368] Setting JSON to false
	I1205 07:05:18.735775  361350 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6463,"bootTime":1764911856,"procs":390,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:05:18.735858  361350 start.go:143] virtualization: kvm guest
	I1205 07:05:18.737474  361350 out.go:179] * [old-k8s-version-874709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:05:18.738484  361350 notify.go:221] Checking for updates...
	I1205 07:05:18.738505  361350 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:05:18.739542  361350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:05:18.740672  361350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:05:18.741792  361350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:05:18.742805  361350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:05:18.743806  361350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:05:18.745106  361350 config.go:182] Loaded profile config "old-k8s-version-874709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1205 07:05:18.746724  361350 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1205 07:05:18.747697  361350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:05:18.771894  361350 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:05:18.771966  361350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:05:18.833543  361350 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-05 07:05:18.822119874 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:05:18.833722  361350 docker.go:319] overlay module found
	I1205 07:05:18.836020  361350 out.go:179] * Using the docker driver based on existing profile
	I1205 07:05:18.837044  361350 start.go:309] selected driver: docker
	I1205 07:05:18.837059  361350 start.go:927] validating driver "docker" against &{Name:old-k8s-version-874709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:05:18.837147  361350 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:05:18.837918  361350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:05:18.902352  361350 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-05 07:05:18.893064552 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:05:18.902717  361350 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:05:18.902759  361350 cni.go:84] Creating CNI manager for ""
	I1205 07:05:18.902833  361350 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:05:18.902885  361350 start.go:353] cluster config:
	{Name:old-k8s-version-874709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:05:18.904269  361350 out.go:179] * Starting "old-k8s-version-874709" primary control-plane node in "old-k8s-version-874709" cluster
	I1205 07:05:18.905376  361350 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:05:18.906871  361350 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:05:18.907897  361350 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1205 07:05:18.907932  361350 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1205 07:05:18.907956  361350 cache.go:65] Caching tarball of preloaded images
	I1205 07:05:18.908007  361350 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:05:18.908072  361350 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 07:05:18.908089  361350 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1205 07:05:18.908227  361350 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/config.json ...
	I1205 07:05:18.929964  361350 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:05:18.929984  361350 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:05:18.929998  361350 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:05:18.930029  361350 start.go:360] acquireMachinesLock for old-k8s-version-874709: {Name:mk958e6ec1b48ba175b34133d850223c6d6a6548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:05:18.930090  361350 start.go:364] duration metric: took 40.527µs to acquireMachinesLock for "old-k8s-version-874709"
	I1205 07:05:18.930112  361350 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:05:18.930120  361350 fix.go:54] fixHost starting: 
	I1205 07:05:18.930395  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:18.946452  361350 fix.go:112] recreateIfNeeded on old-k8s-version-874709: state=Stopped err=<nil>
	W1205 07:05:18.946475  361350 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:05:16.560664  343486 node_ready.go:57] node "no-preload-008839" has "Ready":"False" status (will retry)
	W1205 07:05:18.561164  343486 node_ready.go:57] node "no-preload-008839" has "Ready":"False" status (will retry)
	I1205 07:05:19.070821  343486 node_ready.go:49] node "no-preload-008839" is "Ready"
	I1205 07:05:19.070909  343486 node_ready.go:38] duration metric: took 13.013062292s for node "no-preload-008839" to be "Ready" ...
	I1205 07:05:19.070954  343486 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:05:19.071049  343486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:05:19.083748  343486 api_server.go:72] duration metric: took 13.322120804s to wait for apiserver process to appear ...
	I1205 07:05:19.083809  343486 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:05:19.083825  343486 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:05:19.088106  343486 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1205 07:05:19.089182  343486 api_server.go:141] control plane version: v1.35.0-beta.0
	I1205 07:05:19.089208  343486 api_server.go:131] duration metric: took 5.392067ms to wait for apiserver health ...
	I1205 07:05:19.089219  343486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:05:19.092755  343486 system_pods.go:59] 8 kube-system pods found
	I1205 07:05:19.092782  343486 system_pods.go:61] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:19.092789  343486 system_pods.go:61] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running
	I1205 07:05:19.092795  343486 system_pods.go:61] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running
	I1205 07:05:19.092799  343486 system_pods.go:61] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running
	I1205 07:05:19.092803  343486 system_pods.go:61] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running
	I1205 07:05:19.092805  343486 system_pods.go:61] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running
	I1205 07:05:19.092808  343486 system_pods.go:61] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running
	I1205 07:05:19.092813  343486 system_pods.go:61] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:19.092818  343486 system_pods.go:74] duration metric: took 3.593925ms to wait for pod list to return data ...
	I1205 07:05:19.092824  343486 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:05:19.095219  343486 default_sa.go:45] found service account: "default"
	I1205 07:05:19.095276  343486 default_sa.go:55] duration metric: took 2.41461ms for default service account to be created ...
	I1205 07:05:19.095292  343486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:05:19.097894  343486 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:19.097926  343486 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:19.097934  343486 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running
	I1205 07:05:19.097943  343486 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running
	I1205 07:05:19.097954  343486 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running
	I1205 07:05:19.097960  343486 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running
	I1205 07:05:19.097965  343486 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running
	I1205 07:05:19.097971  343486 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running
	I1205 07:05:19.097979  343486 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:19.098006  343486 retry.go:31] will retry after 187.583527ms: missing components: kube-dns
	I1205 07:05:19.289459  343486 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:19.289489  343486 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:19.289495  343486 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running
	I1205 07:05:19.289501  343486 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running
	I1205 07:05:19.289505  343486 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running
	I1205 07:05:19.289509  343486 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running
	I1205 07:05:19.289514  343486 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running
	I1205 07:05:19.289518  343486 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running
	I1205 07:05:19.289523  343486 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:19.289540  343486 retry.go:31] will retry after 293.191566ms: missing components: kube-dns
	I1205 07:05:19.586140  343486 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:19.586170  343486 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:19.586177  343486 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running
	I1205 07:05:19.586182  343486 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running
	I1205 07:05:19.586186  343486 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running
	I1205 07:05:19.586191  343486 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running
	I1205 07:05:19.586194  343486 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running
	I1205 07:05:19.586198  343486 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running
	I1205 07:05:19.586202  343486 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:19.586216  343486 retry.go:31] will retry after 446.49776ms: missing components: kube-dns
	I1205 07:05:20.037104  343486 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:20.037133  343486 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Running
	I1205 07:05:20.037140  343486 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running
	I1205 07:05:20.037144  343486 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running
	I1205 07:05:20.037148  343486 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running
	I1205 07:05:20.037152  343486 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running
	I1205 07:05:20.037155  343486 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running
	I1205 07:05:20.037158  343486 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running
	I1205 07:05:20.037161  343486 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Running
	I1205 07:05:20.037167  343486 system_pods.go:126] duration metric: took 941.870246ms to wait for k8s-apps to be running ...
	I1205 07:05:20.037174  343486 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:05:20.037213  343486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:05:20.050812  343486 system_svc.go:56] duration metric: took 13.630899ms WaitForService to wait for kubelet
	I1205 07:05:20.050836  343486 kubeadm.go:587] duration metric: took 14.289211279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:05:20.050852  343486 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:05:20.053182  343486 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:05:20.053206  343486 node_conditions.go:123] node cpu capacity is 8
	I1205 07:05:20.053228  343486 node_conditions.go:105] duration metric: took 2.370253ms to run NodePressure ...
	I1205 07:05:20.053242  343486 start.go:242] waiting for startup goroutines ...
	I1205 07:05:20.053255  343486 start.go:247] waiting for cluster config update ...
	I1205 07:05:20.053272  343486 start.go:256] writing updated cluster config ...
	I1205 07:05:20.053567  343486 ssh_runner.go:195] Run: rm -f paused
	I1205 07:05:20.057994  343486 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:05:20.061040  343486 pod_ready.go:83] waiting for pod "coredns-7d764666f9-bvbhf" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.065075  343486 pod_ready.go:94] pod "coredns-7d764666f9-bvbhf" is "Ready"
	I1205 07:05:20.065091  343486 pod_ready.go:86] duration metric: took 4.033756ms for pod "coredns-7d764666f9-bvbhf" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.066866  343486 pod_ready.go:83] waiting for pod "etcd-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.072209  343486 pod_ready.go:94] pod "etcd-no-preload-008839" is "Ready"
	I1205 07:05:20.072224  343486 pod_ready.go:86] duration metric: took 5.344359ms for pod "etcd-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.073802  343486 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.076905  343486 pod_ready.go:94] pod "kube-apiserver-no-preload-008839" is "Ready"
	I1205 07:05:20.076918  343486 pod_ready.go:86] duration metric: took 3.100692ms for pod "kube-apiserver-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.078571  343486 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.461552  343486 pod_ready.go:94] pod "kube-controller-manager-no-preload-008839" is "Ready"
	I1205 07:05:20.461587  343486 pod_ready.go:86] duration metric: took 382.993872ms for pod "kube-controller-manager-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:20.662226  343486 pod_ready.go:83] waiting for pod "kube-proxy-s9zn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:21.061491  343486 pod_ready.go:94] pod "kube-proxy-s9zn2" is "Ready"
	I1205 07:05:21.061522  343486 pod_ready.go:86] duration metric: took 399.270874ms for pod "kube-proxy-s9zn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:21.262993  343486 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:21.662454  343486 pod_ready.go:94] pod "kube-scheduler-no-preload-008839" is "Ready"
	I1205 07:05:21.662476  343486 pod_ready.go:86] duration metric: took 399.457643ms for pod "kube-scheduler-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:21.662488  343486 pod_ready.go:40] duration metric: took 1.604463442s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:05:21.709168  343486 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1205 07:05:21.710903  343486 out.go:179] * Done! kubectl is now configured to use "no-preload-008839" cluster and "default" namespace by default
	W1205 07:05:18.308653  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	W1205 07:05:20.808836  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	I1205 07:05:18.809545  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:19.309502  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:19.808972  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:20.309569  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:20.809444  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:21.309261  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:21.808758  355650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:05:21.887928  355650 kubeadm.go:1114] duration metric: took 5.16553977s to wait for elevateKubeSystemPrivileges
	I1205 07:05:21.887963  355650 kubeadm.go:403] duration metric: took 18.370040269s to StartCluster
	I1205 07:05:21.887978  355650 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:21.888036  355650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:05:21.889657  355650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:21.889879  355650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 07:05:21.889898  355650 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:05:21.889944  355650 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:05:21.890067  355650 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-172186"
	I1205 07:05:21.890077  355650 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:05:21.890086  355650 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-172186"
	I1205 07:05:21.890157  355650 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:05:21.890110  355650 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-172186"
	I1205 07:05:21.890200  355650 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-172186"
	I1205 07:05:21.890581  355650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:05:21.890735  355650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:05:21.891115  355650 out.go:179] * Verifying Kubernetes components...
	I1205 07:05:21.892428  355650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:05:21.914951  355650 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:05:21.916220  355650 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:05:21.916252  355650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:05:21.916384  355650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:05:21.916938  355650 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-172186"
	I1205 07:05:21.917045  355650 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:05:21.917530  355650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:05:21.946096  355650 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:05:21.946122  355650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:05:21.946195  355650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:05:21.947726  355650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:05:21.968188  355650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:05:21.988791  355650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 07:05:22.046493  355650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:05:22.065706  355650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:05:22.081514  355650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:05:22.161085  355650 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1205 07:05:22.162807  355650 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-172186" to be "Ready" ...
	I1205 07:05:22.359441  355650 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1205 07:05:22.360384  355650 addons.go:530] duration metric: took 470.438863ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 07:05:22.665477  355650 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-172186" context rescaled to 1 replicas
	I1205 07:05:18.947912  361350 out.go:252] * Restarting existing docker container for "old-k8s-version-874709" ...
	I1205 07:05:18.947975  361350 cli_runner.go:164] Run: docker start old-k8s-version-874709
	I1205 07:05:19.216592  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:19.236196  361350 kic.go:430] container "old-k8s-version-874709" state is running.
	I1205 07:05:19.236585  361350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-874709
	I1205 07:05:19.254644  361350 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/config.json ...
	I1205 07:05:19.254833  361350 machine.go:94] provisionDockerMachine start ...
	I1205 07:05:19.254892  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:19.273302  361350 main.go:143] libmachine: Using SSH client type: native
	I1205 07:05:19.273572  361350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1205 07:05:19.273587  361350 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:05:19.274189  361350 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40708->127.0.0.1:33113: read: connection reset by peer
	I1205 07:05:22.421014  361350 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-874709
	
	I1205 07:05:22.421044  361350 ubuntu.go:182] provisioning hostname "old-k8s-version-874709"
	I1205 07:05:22.421104  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:22.439815  361350 main.go:143] libmachine: Using SSH client type: native
	I1205 07:05:22.440029  361350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1205 07:05:22.440045  361350 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-874709 && echo "old-k8s-version-874709" | sudo tee /etc/hostname
	I1205 07:05:22.588524  361350 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-874709
	
	I1205 07:05:22.588613  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:22.607348  361350 main.go:143] libmachine: Using SSH client type: native
	I1205 07:05:22.607657  361350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1205 07:05:22.607686  361350 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-874709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-874709/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-874709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:05:22.746318  361350 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:05:22.746359  361350 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:05:22.746419  361350 ubuntu.go:190] setting up certificates
	I1205 07:05:22.746433  361350 provision.go:84] configureAuth start
	I1205 07:05:22.746497  361350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-874709
	I1205 07:05:22.769297  361350 provision.go:143] copyHostCerts
	I1205 07:05:22.769433  361350 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:05:22.769446  361350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:05:22.769527  361350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:05:22.769673  361350 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:05:22.769682  361350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:05:22.769736  361350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:05:22.769834  361350 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:05:22.769841  361350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:05:22.769878  361350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:05:22.769964  361350 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-874709 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-874709]
	I1205 07:05:22.798244  361350 provision.go:177] copyRemoteCerts
	I1205 07:05:22.798335  361350 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:05:22.798385  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:22.818892  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:22.922010  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:05:22.941008  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 07:05:22.957532  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:05:22.974225  361350 provision.go:87] duration metric: took 227.777325ms to configureAuth
	I1205 07:05:22.974248  361350 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:05:22.974425  361350 config.go:182] Loaded profile config "old-k8s-version-874709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1205 07:05:22.974533  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:22.991889  361350 main.go:143] libmachine: Using SSH client type: native
	I1205 07:05:22.992114  361350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1205 07:05:22.992136  361350 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:05:23.303830  361350 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:05:23.303857  361350 machine.go:97] duration metric: took 4.049011499s to provisionDockerMachine
	I1205 07:05:23.303870  361350 start.go:293] postStartSetup for "old-k8s-version-874709" (driver="docker")
	I1205 07:05:23.303884  361350 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:05:23.303945  361350 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:05:23.303992  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:23.323474  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:23.420939  361350 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:05:23.424400  361350 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:05:23.424425  361350 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:05:23.424434  361350 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:05:23.424475  361350 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:05:23.424544  361350 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:05:23.424647  361350 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:05:23.432359  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:05:23.450500  361350 start.go:296] duration metric: took 146.61716ms for postStartSetup
	I1205 07:05:23.450566  361350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:05:23.450600  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:23.469665  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:23.564097  361350 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:05:23.568533  361350 fix.go:56] duration metric: took 4.638408012s for fixHost
	I1205 07:05:23.568561  361350 start.go:83] releasing machines lock for "old-k8s-version-874709", held for 4.6384531s
	I1205 07:05:23.568621  361350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-874709
	I1205 07:05:23.586088  361350 ssh_runner.go:195] Run: cat /version.json
	I1205 07:05:23.586152  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:23.586186  361350 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:05:23.586292  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:23.604172  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:23.605418  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:23.755814  361350 ssh_runner.go:195] Run: systemctl --version
	I1205 07:05:23.762465  361350 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:05:23.794667  361350 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:05:23.799124  361350 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:05:23.799182  361350 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:05:23.807255  361350 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:05:23.807272  361350 start.go:496] detecting cgroup driver to use...
	I1205 07:05:23.807297  361350 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:05:23.807348  361350 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:05:23.822039  361350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:05:23.834011  361350 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:05:23.834057  361350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:05:23.847299  361350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:05:23.859584  361350 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:05:23.938418  361350 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:05:24.013904  361350 docker.go:234] disabling docker service ...
	I1205 07:05:24.013969  361350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:05:24.028379  361350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:05:24.039619  361350 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:05:24.114233  361350 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:05:24.199092  361350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:05:24.211045  361350 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:05:24.224645  361350 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 07:05:24.224694  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.233803  361350 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:05:24.233849  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.241986  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.250089  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.258031  361350 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:05:24.265526  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.273467  361350 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.281040  361350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:05:24.288986  361350 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:05:24.295577  361350 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:05:24.302149  361350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:05:24.381683  361350 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:05:24.513406  361350 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:05:24.513468  361350 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:05:24.517306  361350 start.go:564] Will wait 60s for crictl version
	I1205 07:05:24.517376  361350 ssh_runner.go:195] Run: which crictl
	I1205 07:05:24.521188  361350 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:05:24.546369  361350 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:05:24.546458  361350 ssh_runner.go:195] Run: crio --version
	I1205 07:05:24.573084  361350 ssh_runner.go:195] Run: crio --version
	I1205 07:05:24.600432  361350 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
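The block above switches the node over to CRI-O: docker.socket/docker.service and cri-docker are stopped and masked, crictl is pointed at the CRI-O socket, and the pause image and cgroup driver are pinned in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. Below is a minimal standalone sketch of those same steps, assuming a shell on the node with sudo available; the runOnNode helper is hypothetical and merely stands in for minikube's ssh_runner, and the paths and values are the ones shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

// runOnNode is a hypothetical stand-in for minikube's ssh_runner: it runs the
// command locally and surfaces the combined output on failure.
func runOnNode(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	steps := [][]string{
		// point crictl at the CRI-O socket (same content the log writes to /etc/crictl.yaml)
		{"sudo", "sh", "-c", `printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml`},
		// pin the pause image and cgroup driver that kubeadm will expect
		{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
		{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
		// pick up the new configuration
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if err := runOnNode(s...); err != nil {
			panic(err)
		}
	}
	fmt.Println("CRI-O reconfigured and restarted")
}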
	I1205 07:05:24.601389  361350 cli_runner.go:164] Run: docker network inspect old-k8s-version-874709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:05:24.618878  361350 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1205 07:05:24.622703  361350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:05:24.633060  361350 kubeadm.go:884] updating cluster {Name:old-k8s-version-874709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874709 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:05:24.633187  361350 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1205 07:05:24.633224  361350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:05:24.666471  361350 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:05:24.666489  361350 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:05:24.666530  361350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:05:24.691979  361350 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:05:24.691997  361350 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:05:24.692003  361350 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1205 07:05:24.692100  361350 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-874709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:05:24.692154  361350 ssh_runner.go:195] Run: crio config
	I1205 07:05:24.736227  361350 cni.go:84] Creating CNI manager for ""
	I1205 07:05:24.736246  361350 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:05:24.736257  361350 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:05:24.736296  361350 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-874709 NodeName:old-k8s-version-874709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:05:24.736499  361350 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-874709"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:05:24.736582  361350 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1205 07:05:24.744933  361350 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:05:24.744990  361350 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:05:24.752803  361350 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1205 07:05:24.765404  361350 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:05:24.777441  361350 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
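Three artifacts are staged on the node here: the kubelet drop-in 10-kubeadm.conf, the kubelet.service unit, and the rendered kubeadm.yaml.new shown above. In the drop-in, the empty "ExecStart=" line is standard systemd behaviour for clearing the ExecStart inherited from the base unit before defining the minikube-specific command line. The following is a small, hypothetical consistency check (not part of minikube) that the staged kubelet configuration agrees with the cgroup_manager CRI-O was configured with earlier; the file paths are the ones from the log and the check is meant to run on the node.

package main

import (
	"fmt"
	"os"
	"strings"
)

// contains reports whether the file at path includes the given literal string.
func contains(path, needle string) bool {
	b, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return false
	}
	return strings.Contains(string(b), needle)
}

func main() {
	crioSystemd := contains("/etc/crio/crio.conf.d/02-crio.conf", `cgroup_manager = "systemd"`)
	kubeletSystemd := contains("/var/tmp/minikube/kubeadm.yaml.new", "cgroupDriver: systemd")
	if crioSystemd && kubeletSystemd {
		fmt.Println("CRI-O and kubelet both use the systemd cgroup driver")
		return
	}
	fmt.Println("cgroup driver mismatch between CRI-O and the staged kubelet config")
	os.Exit(1)
}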
	I1205 07:05:24.788985  361350 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:05:24.792622  361350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:05:24.801914  361350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:05:24.881903  361350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:05:24.904484  361350 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709 for IP: 192.168.103.2
	I1205 07:05:24.904501  361350 certs.go:195] generating shared ca certs ...
	I1205 07:05:24.904516  361350 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:24.904644  361350 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:05:24.904702  361350 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:05:24.904714  361350 certs.go:257] generating profile certs ...
	I1205 07:05:24.904820  361350 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/client.key
	I1205 07:05:24.904873  361350 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/apiserver.key.8f229178
	I1205 07:05:24.904914  361350 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/proxy-client.key
	I1205 07:05:24.905017  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:05:24.905052  361350 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:05:24.905062  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:05:24.905090  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:05:24.905113  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:05:24.905138  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:05:24.905177  361350 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:05:24.905830  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:05:24.923443  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:05:24.941714  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:05:24.959879  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:05:24.980027  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 07:05:25.000656  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:05:25.017112  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:05:25.033266  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/old-k8s-version-874709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:05:25.050195  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:05:25.067094  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:05:25.083486  361350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:05:25.100826  361350 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:05:25.113370  361350 ssh_runner.go:195] Run: openssl version
	I1205 07:05:25.119213  361350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:05:25.125963  361350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:05:25.132814  361350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:05:25.136109  361350 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:05:25.136143  361350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:05:25.172312  361350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:05:25.179748  361350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:05:25.186716  361350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:05:25.193521  361350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:05:25.196840  361350 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:05:25.196881  361350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:05:25.231668  361350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:05:25.238492  361350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:05:25.245470  361350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:05:25.253450  361350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:05:25.256888  361350 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:05:25.256923  361350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:05:25.293464  361350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:05:25.300763  361350 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:05:25.304313  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:05:25.338602  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:05:25.373006  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:05:25.416283  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:05:25.460378  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:05:25.499869  361350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
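The certificate handling above has two parts: each CA bundle copied to /usr/share/ca-certificates is added to the system trust store by symlinking /etc/ssl/certs/<subject-hash>.0 to it, and the control-plane certificates are then checked with openssl x509 -checkend 86400, which exits non-zero if a certificate expires within the next 24 hours. Below is a standalone sketch of both steps for a single file, assuming a root shell on the node; the minikubeCA.pem and apiserver-kubelet-client.crt paths are taken from the log, and in practice the same steps run for every certificate listed above.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// subject hash -> /etc/ssl/certs/<hash>.0 symlink, which is how OpenSSL locates trusted CAs
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted via", link)

	// -checkend 86400 makes openssl exit non-zero if the cert expires within 24 hours
	crt := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	if err := exec.Command("openssl", "x509", "-noout", "-in", crt, "-checkend", "86400").Run(); err != nil {
		fmt.Println("certificate expires within 24h:", crt)
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h:", crt)
}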
	I1205 07:05:25.556132  361350 kubeadm.go:401] StartCluster: {Name:old-k8s-version-874709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874709 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:05:25.556237  361350 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:05:25.556309  361350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:05:25.591064  361350 cri.go:89] found id: "a5a9622dfd7dc6fdcabf3ea8aec3eaeabfdda77bc311ed906f332cc7d039353d"
	I1205 07:05:25.591095  361350 cri.go:89] found id: "6be13235867d468a9e246f51290d3c4f7ea7f6f8510393f2a1b3dab9fbb99a9b"
	I1205 07:05:25.591120  361350 cri.go:89] found id: "7c7e915cc7becaf51abc1256271d87f755bc16e224a0daf6a90d291932385f08"
	I1205 07:05:25.591125  361350 cri.go:89] found id: "ffe21b4df5d3a969685218725304cbe5f9fc2b6432a5f7451e96a4edabf288fc"
	I1205 07:05:25.591130  361350 cri.go:89] found id: ""
	I1205 07:05:25.591174  361350 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:05:25.603289  361350 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:05:25Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:05:25.603376  361350 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:05:25.611519  361350 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:05:25.611537  361350 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:05:25.611591  361350 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:05:25.619110  361350 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:05:25.620504  361350 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-874709" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:05:25.621490  361350 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-874709" cluster setting kubeconfig missing "old-k8s-version-874709" context setting]
	I1205 07:05:25.622478  361350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:25.624299  361350 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:05:25.631529  361350 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1205 07:05:25.631560  361350 kubeadm.go:602] duration metric: took 20.012895ms to restartPrimaryControlPlane
	I1205 07:05:25.631567  361350 kubeadm.go:403] duration metric: took 75.4473ms to StartCluster
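The restart decision above comes down to a plain file comparison: kubeadm.yaml is the configuration the control plane was last set up with and kubeadm.yaml.new is the one just rendered, so an empty diff means the existing control plane can be reused without reconfiguration. A hypothetical illustration of that check follows; diff exits 0 when the files match and 1 when they differ, and the paths are the ones from the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("configs match: running cluster does not require reconfiguration")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Println("configs differ: control plane would be reconfigured")
		fmt.Print(string(out))
	default:
		panic(err)
	}
}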
	I1205 07:05:25.631579  361350 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:25.631630  361350 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:05:25.633577  361350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:05:25.633800  361350 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:05:25.633873  361350 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:05:25.633957  361350 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-874709"
	I1205 07:05:25.633986  361350 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-874709"
	W1205 07:05:25.633998  361350 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:05:25.634027  361350 host.go:66] Checking if "old-k8s-version-874709" exists ...
	I1205 07:05:25.634090  361350 config.go:182] Loaded profile config "old-k8s-version-874709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1205 07:05:25.634226  361350 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-874709"
	I1205 07:05:25.634255  361350 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-874709"
	I1205 07:05:25.634617  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:25.634647  361350 addons.go:70] Setting dashboard=true in profile "old-k8s-version-874709"
	I1205 07:05:25.634668  361350 addons.go:239] Setting addon dashboard=true in "old-k8s-version-874709"
	W1205 07:05:25.634677  361350 addons.go:248] addon dashboard should already be in state true
	I1205 07:05:25.634790  361350 host.go:66] Checking if "old-k8s-version-874709" exists ...
	I1205 07:05:25.634622  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:25.635384  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:25.635926  361350 out.go:179] * Verifying Kubernetes components...
	I1205 07:05:25.637108  361350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:05:25.662647  361350 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:05:25.662647  361350 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:05:25.662907  361350 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-874709"
	W1205 07:05:25.662924  361350 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:05:25.662945  361350 host.go:66] Checking if "old-k8s-version-874709" exists ...
	I1205 07:05:25.663490  361350 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:05:25.663880  361350 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:05:25.663897  361350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:05:25.663943  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:25.664835  361350 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1205 07:05:22.808920  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	W1205 07:05:25.308649  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	I1205 07:05:25.668192  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:05:25.668210  361350 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:05:25.668276  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:25.698069  361350 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:05:25.698138  361350 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:05:25.698199  361350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:05:25.703843  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:25.706842  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:25.721133  361350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:05:25.806050  361350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:05:25.823463  361350 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-874709" to be "Ready" ...
	I1205 07:05:25.824054  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:05:25.824073  361350 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:05:25.826097  361350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:05:25.838375  361350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:05:25.838587  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:05:25.838605  361350 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:05:25.855172  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:05:25.855191  361350 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:05:25.871286  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:05:25.871304  361350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:05:25.889076  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:05:25.889095  361350 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:05:25.905311  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:05:25.905354  361350 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:05:25.921271  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:05:25.921293  361350 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:05:25.935244  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:05:25.935266  361350 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:05:25.947955  361350 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:05:25.947970  361350 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:05:25.960865  361350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:05:27.605896  361350 node_ready.go:49] node "old-k8s-version-874709" is "Ready"
	I1205 07:05:27.605933  361350 node_ready.go:38] duration metric: took 1.782430557s for node "old-k8s-version-874709" to be "Ready" ...
	I1205 07:05:27.605949  361350 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:05:27.605999  361350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:05:28.232682  361350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.406554683s)
	I1205 07:05:28.232743  361350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.394340724s)
	I1205 07:05:28.557259  361350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.596325785s)
	I1205 07:05:28.557292  361350 api_server.go:72] duration metric: took 2.923453611s to wait for apiserver process to appear ...
	I1205 07:05:28.557313  361350 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:05:28.557355  361350 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:05:28.558583  361350 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-874709 addons enable metrics-server
	
	I1205 07:05:28.559814  361350 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1205 07:05:24.166260  355650 node_ready.go:57] node "default-k8s-diff-port-172186" has "Ready":"False" status (will retry)
	W1205 07:05:26.665821  355650 node_ready.go:57] node "default-k8s-diff-port-172186" has "Ready":"False" status (will retry)
	I1205 07:05:28.561201  361350 addons.go:530] duration metric: took 2.927329498s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
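Each addon is enabled the same way: its manifests are copied under /etc/kubernetes/addons and applied with the kubectl binary minikube placed under /var/lib/minikube/binaries, using the node-local kubeconfig, exactly as the apply commands above show. The sketch below is an illustration of that invocation, not minikube's code; it applies only two of the dashboard manifests from the log, whereas the real command passes the full list in a single call.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// sudo accepts leading VAR=value assignments, which is how the log sets KUBECONFIG
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.0/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
		"-f", "/etc/kubernetes/addons/dashboard-dp.yaml",
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}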
	I1205 07:05:28.563129  361350 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 07:05:28.563152  361350 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 07:05:27.309073  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	W1205 07:05:29.808110  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	W1205 07:05:31.809199  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	W1205 07:05:28.666156  355650 node_ready.go:57] node "default-k8s-diff-port-172186" has "Ready":"False" status (will retry)
	W1205 07:05:30.666908  355650 node_ready.go:57] node "default-k8s-diff-port-172186" has "Ready":"False" status (will retry)
	I1205 07:05:32.166102  355650 node_ready.go:49] node "default-k8s-diff-port-172186" is "Ready"
	I1205 07:05:32.166129  355650 node_ready.go:38] duration metric: took 10.003291279s for node "default-k8s-diff-port-172186" to be "Ready" ...
	I1205 07:05:32.166143  355650 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:05:32.166195  355650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:05:32.179151  355650 api_server.go:72] duration metric: took 10.289220283s to wait for apiserver process to appear ...
	I1205 07:05:32.179175  355650 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:05:32.179195  355650 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1205 07:05:32.183449  355650 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1205 07:05:32.184980  355650 api_server.go:141] control plane version: v1.34.2
	I1205 07:05:32.185012  355650 api_server.go:131] duration metric: took 5.829092ms to wait for apiserver health ...
	I1205 07:05:32.185022  355650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:05:32.188316  355650 system_pods.go:59] 8 kube-system pods found
	I1205 07:05:32.188371  355650 system_pods.go:61] "coredns-66bc5c9577-lzlm8" [ee60b2ad-840a-442d-9475-85e27048c452] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:32.188379  355650 system_pods.go:61] "etcd-default-k8s-diff-port-172186" [f165837d-edeb-4226-920b-b23d2ca9bf68] Running
	I1205 07:05:32.188388  355650 system_pods.go:61] "kindnet-w2mzg" [3de2accc-6a87-4b4c-920d-74d5b5058c8e] Running
	I1205 07:05:32.188395  355650 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-172186" [f0c01c8a-a8dd-4883-9b95-1c85dddc33d2] Running
	I1205 07:05:32.188405  355650 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-172186" [74cc489e-2a21-4ab1-b8a3-b2bfca1c58ba] Running
	I1205 07:05:32.188414  355650 system_pods.go:61] "kube-proxy-fpss6" [9c1a939e-c7e6-4202-bffa-374ace420fd7] Running
	I1205 07:05:32.188419  355650 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-172186" [e0764d08-18fe-47c0-b6b1-648c2c6fb1db] Running
	I1205 07:05:32.188429  355650 system_pods.go:61] "storage-provisioner" [cf31286d-bf29-4883-828c-4e9aee83201f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:32.188437  355650 system_pods.go:74] duration metric: took 3.40816ms to wait for pod list to return data ...
	I1205 07:05:32.188450  355650 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:05:32.190781  355650 default_sa.go:45] found service account: "default"
	I1205 07:05:32.190811  355650 default_sa.go:55] duration metric: took 2.348757ms for default service account to be created ...
	I1205 07:05:32.190819  355650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:05:32.193136  355650 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:32.193157  355650 system_pods.go:89] "coredns-66bc5c9577-lzlm8" [ee60b2ad-840a-442d-9475-85e27048c452] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:32.193163  355650 system_pods.go:89] "etcd-default-k8s-diff-port-172186" [f165837d-edeb-4226-920b-b23d2ca9bf68] Running
	I1205 07:05:32.193169  355650 system_pods.go:89] "kindnet-w2mzg" [3de2accc-6a87-4b4c-920d-74d5b5058c8e] Running
	I1205 07:05:32.193173  355650 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-172186" [f0c01c8a-a8dd-4883-9b95-1c85dddc33d2] Running
	I1205 07:05:32.193176  355650 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-172186" [74cc489e-2a21-4ab1-b8a3-b2bfca1c58ba] Running
	I1205 07:05:32.193179  355650 system_pods.go:89] "kube-proxy-fpss6" [9c1a939e-c7e6-4202-bffa-374ace420fd7] Running
	I1205 07:05:32.193184  355650 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-172186" [e0764d08-18fe-47c0-b6b1-648c2c6fb1db] Running
	I1205 07:05:32.193192  355650 system_pods.go:89] "storage-provisioner" [cf31286d-bf29-4883-828c-4e9aee83201f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:32.193207  355650 retry.go:31] will retry after 217.387062ms: missing components: kube-dns
	I1205 07:05:32.414522  355650 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:32.414560  355650 system_pods.go:89] "coredns-66bc5c9577-lzlm8" [ee60b2ad-840a-442d-9475-85e27048c452] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:32.414567  355650 system_pods.go:89] "etcd-default-k8s-diff-port-172186" [f165837d-edeb-4226-920b-b23d2ca9bf68] Running
	I1205 07:05:32.414572  355650 system_pods.go:89] "kindnet-w2mzg" [3de2accc-6a87-4b4c-920d-74d5b5058c8e] Running
	I1205 07:05:32.414576  355650 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-172186" [f0c01c8a-a8dd-4883-9b95-1c85dddc33d2] Running
	I1205 07:05:32.414579  355650 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-172186" [74cc489e-2a21-4ab1-b8a3-b2bfca1c58ba] Running
	I1205 07:05:32.414583  355650 system_pods.go:89] "kube-proxy-fpss6" [9c1a939e-c7e6-4202-bffa-374ace420fd7] Running
	I1205 07:05:32.414586  355650 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-172186" [e0764d08-18fe-47c0-b6b1-648c2c6fb1db] Running
	I1205 07:05:32.414594  355650 system_pods.go:89] "storage-provisioner" [cf31286d-bf29-4883-828c-4e9aee83201f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:32.414610  355650 retry.go:31] will retry after 265.457619ms: missing components: kube-dns
	I1205 07:05:32.684446  355650 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:32.684481  355650 system_pods.go:89] "coredns-66bc5c9577-lzlm8" [ee60b2ad-840a-442d-9475-85e27048c452] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:32.684489  355650 system_pods.go:89] "etcd-default-k8s-diff-port-172186" [f165837d-edeb-4226-920b-b23d2ca9bf68] Running
	I1205 07:05:32.684497  355650 system_pods.go:89] "kindnet-w2mzg" [3de2accc-6a87-4b4c-920d-74d5b5058c8e] Running
	I1205 07:05:32.684503  355650 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-172186" [f0c01c8a-a8dd-4883-9b95-1c85dddc33d2] Running
	I1205 07:05:32.684508  355650 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-172186" [74cc489e-2a21-4ab1-b8a3-b2bfca1c58ba] Running
	I1205 07:05:32.684512  355650 system_pods.go:89] "kube-proxy-fpss6" [9c1a939e-c7e6-4202-bffa-374ace420fd7] Running
	I1205 07:05:32.684516  355650 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-172186" [e0764d08-18fe-47c0-b6b1-648c2c6fb1db] Running
	I1205 07:05:32.684523  355650 system_pods.go:89] "storage-provisioner" [cf31286d-bf29-4883-828c-4e9aee83201f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:32.684545  355650 retry.go:31] will retry after 424.844776ms: missing components: kube-dns
	I1205 07:05:33.113452  355650 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:33.113487  355650 system_pods.go:89] "coredns-66bc5c9577-lzlm8" [ee60b2ad-840a-442d-9475-85e27048c452] Running
	I1205 07:05:33.113493  355650 system_pods.go:89] "etcd-default-k8s-diff-port-172186" [f165837d-edeb-4226-920b-b23d2ca9bf68] Running
	I1205 07:05:33.113499  355650 system_pods.go:89] "kindnet-w2mzg" [3de2accc-6a87-4b4c-920d-74d5b5058c8e] Running
	I1205 07:05:33.113506  355650 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-172186" [f0c01c8a-a8dd-4883-9b95-1c85dddc33d2] Running
	I1205 07:05:33.113510  355650 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-172186" [74cc489e-2a21-4ab1-b8a3-b2bfca1c58ba] Running
	I1205 07:05:33.113514  355650 system_pods.go:89] "kube-proxy-fpss6" [9c1a939e-c7e6-4202-bffa-374ace420fd7] Running
	I1205 07:05:33.113517  355650 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-172186" [e0764d08-18fe-47c0-b6b1-648c2c6fb1db] Running
	I1205 07:05:33.113520  355650 system_pods.go:89] "storage-provisioner" [cf31286d-bf29-4883-828c-4e9aee83201f] Running
	I1205 07:05:33.113526  355650 system_pods.go:126] duration metric: took 922.70274ms to wait for k8s-apps to be running ...
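The system_pods wait above polls the kube-system namespace until no component is missing, retrying with short delays while coredns and storage-provisioner come up. Below is a simplified client-go sketch of the same idea, assuming the node-local kubeconfig path from the log; it only checks the pod phase, whereas minikube also inspects container readiness and the specific component labels.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		notRunning := 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				notRunning++
			}
		}
		if len(pods.Items) > 0 && notRunning == 0 {
			fmt.Println("all kube-system pods are running")
			return
		}
		fmt.Printf("%d kube-system pod(s) not running yet, retrying\n", notRunning)
		time.Sleep(500 * time.Millisecond)
	}
}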
	I1205 07:05:33.113534  355650 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:05:33.113577  355650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:05:33.126612  355650 system_svc.go:56] duration metric: took 13.070407ms WaitForService to wait for kubelet
	I1205 07:05:33.126637  355650 kubeadm.go:587] duration metric: took 11.236711124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:05:33.126655  355650 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:05:33.128978  355650 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:05:33.128999  355650 node_conditions.go:123] node cpu capacity is 8
	I1205 07:05:33.129014  355650 node_conditions.go:105] duration metric: took 2.35429ms to run NodePressure ...
	I1205 07:05:33.129024  355650 start.go:242] waiting for startup goroutines ...
	I1205 07:05:33.129032  355650 start.go:247] waiting for cluster config update ...
	I1205 07:05:33.129045  355650 start.go:256] writing updated cluster config ...
	I1205 07:05:33.129264  355650 ssh_runner.go:195] Run: rm -f paused
	I1205 07:05:33.132870  355650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:05:33.136126  355650 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lzlm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:33.139785  355650 pod_ready.go:94] pod "coredns-66bc5c9577-lzlm8" is "Ready"
	I1205 07:05:33.139803  355650 pod_ready.go:86] duration metric: took 3.657958ms for pod "coredns-66bc5c9577-lzlm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:33.141702  355650 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:33.144949  355650 pod_ready.go:94] pod "etcd-default-k8s-diff-port-172186" is "Ready"
	I1205 07:05:33.144967  355650 pod_ready.go:86] duration metric: took 3.248606ms for pod "etcd-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:33.146693  355650 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:33.149935  355650 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-172186" is "Ready"
	I1205 07:05:33.149950  355650 pod_ready.go:86] duration metric: took 3.240224ms for pod "kube-apiserver-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:33.151576  355650 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:33.536865  355650 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-172186" is "Ready"
	I1205 07:05:33.536892  355650 pod_ready.go:86] duration metric: took 385.298555ms for pod "kube-controller-manager-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:29.057817  361350 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:05:29.061673  361350 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1205 07:05:29.062733  361350 api_server.go:141] control plane version: v1.28.0
	I1205 07:05:29.062756  361350 api_server.go:131] duration metric: took 505.437617ms to wait for apiserver health ...
	I1205 07:05:29.062765  361350 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:05:29.065963  361350 system_pods.go:59] 8 kube-system pods found
	I1205 07:05:29.065997  361350 system_pods.go:61] "coredns-5dd5756b68-srvvk" [adfb4a20-1e05-4379-89b3-ed0b9a5a4b73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:29.066006  361350 system_pods.go:61] "etcd-old-k8s-version-874709" [f0e9184f-59ea-49f0-b002-a3534a064aa5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:05:29.066013  361350 system_pods.go:61] "kindnet-f9lmb" [ddfb2078-ed57-42bc-9f8a-448f7a54e6d4] Running
	I1205 07:05:29.066022  361350 system_pods.go:61] "kube-apiserver-old-k8s-version-874709" [4d09ade5-3e09-4ab6-98e6-31fd44e495e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:05:29.066031  361350 system_pods.go:61] "kube-controller-manager-old-k8s-version-874709" [491a8479-f2a8-44ef-bb32-77b8aa276e56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:05:29.066041  361350 system_pods.go:61] "kube-proxy-98jls" [2e48ecb2-f73b-4f7e-a021-0e33d12ef572] Running
	I1205 07:05:29.066051  361350 system_pods.go:61] "kube-scheduler-old-k8s-version-874709" [00a11872-3aba-49ab-8866-4536f1a6bad9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:05:29.066060  361350 system_pods.go:61] "storage-provisioner" [c0d7103d-17fc-479f-8958-66bb01a59f8b] Running
	I1205 07:05:29.066068  361350 system_pods.go:74] duration metric: took 3.297887ms to wait for pod list to return data ...
	I1205 07:05:29.066077  361350 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:05:29.067855  361350 default_sa.go:45] found service account: "default"
	I1205 07:05:29.067876  361350 default_sa.go:55] duration metric: took 1.790371ms for default service account to be created ...
	I1205 07:05:29.067885  361350 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:05:29.070455  361350 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:29.070482  361350 system_pods.go:89] "coredns-5dd5756b68-srvvk" [adfb4a20-1e05-4379-89b3-ed0b9a5a4b73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:29.070497  361350 system_pods.go:89] "etcd-old-k8s-version-874709" [f0e9184f-59ea-49f0-b002-a3534a064aa5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:05:29.070507  361350 system_pods.go:89] "kindnet-f9lmb" [ddfb2078-ed57-42bc-9f8a-448f7a54e6d4] Running
	I1205 07:05:29.070514  361350 system_pods.go:89] "kube-apiserver-old-k8s-version-874709" [4d09ade5-3e09-4ab6-98e6-31fd44e495e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:05:29.070519  361350 system_pods.go:89] "kube-controller-manager-old-k8s-version-874709" [491a8479-f2a8-44ef-bb32-77b8aa276e56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:05:29.070525  361350 system_pods.go:89] "kube-proxy-98jls" [2e48ecb2-f73b-4f7e-a021-0e33d12ef572] Running
	I1205 07:05:29.070530  361350 system_pods.go:89] "kube-scheduler-old-k8s-version-874709" [00a11872-3aba-49ab-8866-4536f1a6bad9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:05:29.070533  361350 system_pods.go:89] "storage-provisioner" [c0d7103d-17fc-479f-8958-66bb01a59f8b] Running
	I1205 07:05:29.070542  361350 system_pods.go:126] duration metric: took 2.652164ms to wait for k8s-apps to be running ...
	I1205 07:05:29.070550  361350 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:05:29.070600  361350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:05:29.083618  361350 system_svc.go:56] duration metric: took 13.0633ms WaitForService to wait for kubelet
	I1205 07:05:29.083640  361350 kubeadm.go:587] duration metric: took 3.449805421s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:05:29.083666  361350 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:05:29.085494  361350 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:05:29.085515  361350 node_conditions.go:123] node cpu capacity is 8
	I1205 07:05:29.085531  361350 node_conditions.go:105] duration metric: took 1.859859ms to run NodePressure ...
	I1205 07:05:29.085545  361350 start.go:242] waiting for startup goroutines ...
	I1205 07:05:29.085558  361350 start.go:247] waiting for cluster config update ...
	I1205 07:05:29.085573  361350 start.go:256] writing updated cluster config ...
	I1205 07:05:29.085819  361350 ssh_runner.go:195] Run: rm -f paused
	I1205 07:05:29.089246  361350 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:05:29.092650  361350 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-srvvk" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:05:31.098071  361350 pod_ready.go:104] pod "coredns-5dd5756b68-srvvk" is not "Ready", error: <nil>
	W1205 07:05:33.098727  361350 pod_ready.go:104] pod "coredns-5dd5756b68-srvvk" is not "Ready", error: <nil>
	I1205 07:05:33.736601  355650 pod_ready.go:83] waiting for pod "kube-proxy-fpss6" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:34.137126  355650 pod_ready.go:94] pod "kube-proxy-fpss6" is "Ready"
	I1205 07:05:34.137150  355650 pod_ready.go:86] duration metric: took 400.526065ms for pod "kube-proxy-fpss6" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:34.337169  355650 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:34.737521  355650 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-172186" is "Ready"
	I1205 07:05:34.737545  355650 pod_ready.go:86] duration metric: took 400.352032ms for pod "kube-scheduler-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:34.737558  355650 pod_ready.go:40] duration metric: took 1.604659071s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:05:34.781172  355650 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 07:05:34.783038  355650 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-172186" cluster and "default" namespace by default
	W1205 07:05:34.308271  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	W1205 07:05:36.308778  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	W1205 07:05:35.598182  361350 pod_ready.go:104] pod "coredns-5dd5756b68-srvvk" is not "Ready", error: <nil>
	W1205 07:05:38.097411  361350 pod_ready.go:104] pod "coredns-5dd5756b68-srvvk" is not "Ready", error: <nil>
	W1205 07:05:38.309120  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
	W1205 07:05:40.808806  350525 node_ready.go:57] node "embed-certs-770390" has "Ready":"False" status (will retry)
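The pod_ready lines above are minikube's extra wait: once the control plane reports healthy, it polls every "kube-system" pod carrying one of the listed labels until its Ready condition is True (or the pod is gone), giving up after 4 minutes, which is why the old-k8s-version run keeps retrying on coredns-5dd5756b68-srvvk. Below is a minimal client-go sketch of that style of poll; it is illustrative only, not minikube's actual pod_ready.go, and the kubeconfig path and the k8s-app=kube-dns label selector are assumptions.

// poll_ready.go: a sketch of a pod-readiness wait loop (assumptions noted above).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for the sketch; minikube manages its own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Overall deadline mirrors the "extra waiting up to 4m0s" seen in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		ready := len(pods.Items) > 0
		for i := range pods.Items {
			if !podIsReady(&pods.Items[i]) {
				ready = false
				break
			}
		}
		if ready {
			fmt.Println("all matching pods are Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pods to be Ready")
		case <-time.After(2 * time.Second): // poll interval, chosen for the sketch
		}
	}
}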
	
	
	==> CRI-O <==
	Dec 05 07:05:32 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:32.478005675Z" level=info msg="Starting container: 5897f20b0738ad5ac9d7f5847fa90a22009f4b80bc9123e01057eb35f93b89c3" id=ea4f012f-c3d1-4e5e-b0b8-c03304df957e name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:05:32 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:32.479760447Z" level=info msg="Started container" PID=1891 containerID=5897f20b0738ad5ac9d7f5847fa90a22009f4b80bc9123e01057eb35f93b89c3 description=kube-system/coredns-66bc5c9577-lzlm8/coredns id=ea4f012f-c3d1-4e5e-b0b8-c03304df957e name=/runtime.v1.RuntimeService/StartContainer sandboxID=93954969f5fdfa51fc38c2c731b9a9fc05a83e72d6d81039cc94bef54c2b6c61
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.243557444Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4eb39a3e-4c05-489b-ab4c-50d735ebbb28 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.243622028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.248018297Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2af25ec03f166abab332642d303a1a48c07efe229c953e47df27294e5279f0a2 UID:17b6c1ea-a6af-43b5-91c4-189bf0265bc6 NetNS:/var/run/netns/400f9d5b-2f22-4356-9822-6d1662c27f4e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128640}] Aliases:map[]}"
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.248047302Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.257265203Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2af25ec03f166abab332642d303a1a48c07efe229c953e47df27294e5279f0a2 UID:17b6c1ea-a6af-43b5-91c4-189bf0265bc6 NetNS:/var/run/netns/400f9d5b-2f22-4356-9822-6d1662c27f4e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128640}] Aliases:map[]}"
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.257405972Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.258088287Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.25884808Z" level=info msg="Ran pod sandbox 2af25ec03f166abab332642d303a1a48c07efe229c953e47df27294e5279f0a2 with infra container: default/busybox/POD" id=4eb39a3e-4c05-489b-ab4c-50d735ebbb28 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.259844594Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=379f966d-29cf-4c78-8c47-2057dd3d7a91 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.259977351Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=379f966d-29cf-4c78-8c47-2057dd3d7a91 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.260038349Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=379f966d-29cf-4c78-8c47-2057dd3d7a91 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.260791051Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0113511a-0098-4ff4-b47c-1ec20de94126 name=/runtime.v1.ImageService/PullImage
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.262308364Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.951467101Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0113511a-0098-4ff4-b47c-1ec20de94126 name=/runtime.v1.ImageService/PullImage
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.95218085Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3f370e56-1a9e-4858-865a-2b51a4a46b32 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.953435213Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a723a3f5-1c3b-4d86-8ccf-346fee3936fd name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.95625905Z" level=info msg="Creating container: default/busybox/busybox" id=2cdb67d3-1b0f-427d-ae55-4541704a24e2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.956395813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.960670265Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.961131462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.996078987Z" level=info msg="Created container 2cca129c0851e1a14d462e81c853c45c1709901815e5d113084a0f416b4ecf95: default/busybox/busybox" id=2cdb67d3-1b0f-427d-ae55-4541704a24e2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.996574292Z" level=info msg="Starting container: 2cca129c0851e1a14d462e81c853c45c1709901815e5d113084a0f416b4ecf95" id=de600da6-8184-4b5e-b9bd-e75f0216d210 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:05:35 default-k8s-diff-port-172186 crio[772]: time="2025-12-05T07:05:35.998353364Z" level=info msg="Started container" PID=1967 containerID=2cca129c0851e1a14d462e81c853c45c1709901815e5d113084a0f416b4ecf95 description=default/busybox/busybox id=de600da6-8184-4b5e-b9bd-e75f0216d210 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2af25ec03f166abab332642d303a1a48c07efe229c953e47df27294e5279f0a2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	2cca129c0851e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   2af25ec03f166       busybox                                                default
	5897f20b0738a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   93954969f5fdf       coredns-66bc5c9577-lzlm8                               kube-system
	179686bb23fd3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   e7768f5488a57       storage-provisioner                                    kube-system
	5b00bf335f94c       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      22 seconds ago      Running             kube-proxy                0                   c5e070cce408b       kube-proxy-fpss6                                       kube-system
	85b16e43c02a2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   19aaa70cff455       kindnet-w2mzg                                          kube-system
	4314a9f2639d7       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      32 seconds ago      Running             kube-apiserver            0                   81d640dada376       kube-apiserver-default-k8s-diff-port-172186            kube-system
	c0e02d65279f8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      32 seconds ago      Running             kube-controller-manager   0                   7f51d78d85fb3       kube-controller-manager-default-k8s-diff-port-172186   kube-system
	c02155134863b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      32 seconds ago      Running             etcd                      0                   06da9104862d9       etcd-default-k8s-diff-port-172186                      kube-system
	c0cb8e7d7f582       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      32 seconds ago      Running             kube-scheduler            0                   6b3218ea6a014       kube-scheduler-default-k8s-diff-port-172186            kube-system
	
	
	==> coredns [5897f20b0738ad5ac9d7f5847fa90a22009f4b80bc9123e01057eb35f93b89c3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59530 - 51321 "HINFO IN 8715916798448912152.3322983420847396236. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.050799603s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-172186
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-172186
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=default-k8s-diff-port-172186
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_05_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:05:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-172186
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:05:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:05:32 +0000   Fri, 05 Dec 2025 07:05:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:05:32 +0000   Fri, 05 Dec 2025 07:05:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:05:32 +0000   Fri, 05 Dec 2025 07:05:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:05:32 +0000   Fri, 05 Dec 2025 07:05:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-172186
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                0c6d18bf-2e40-435b-9be8-d014e737e08c
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-lzlm8                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     22s
	  kube-system                 etcd-default-k8s-diff-port-172186                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-w2mzg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-172186             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-172186    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-fpss6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-172186             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21s   kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s   kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23s   node-controller  Node default-k8s-diff-port-172186 event: Registered Node default-k8s-diff-port-172186 in Controller
	  Normal  NodeReady                11s   kubelet          Node default-k8s-diff-port-172186 status is now: NodeReady
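The Capacity and Conditions blocks in this describe output are the same fields the earlier node_conditions entries verify (ephemeral-storage 304681132Ki, 8 CPUs, and the pressure conditions staying False while Ready is True). A minimal client-go sketch that reads those fields is below; it is illustrative only, and the kubeconfig path is an assumption.

// node_check.go: a sketch of reading node capacity and pressure conditions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity carries the cpu and ephemeral-storage figures shown above.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False; Ready should be True.
			fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
		}
	}
}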
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [c02155134863b0d2601eb013c5a1e1ee3b10e251f098cde717653565cc2a50a5] <==
	{"level":"warn","ts":"2025-12-05T07:05:12.592572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.602033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.609211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.616829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.624124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.630349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.636628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.642921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.650169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.659292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.667271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.673718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.679999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.686072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.693039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.700819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.707122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.713120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.719989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.726471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.732374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.752276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.758212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.764566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:12.815689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41346","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:05:43 up  1:48,  0 user,  load average: 4.38, 3.36, 2.20
	Linux default-k8s-diff-port-172186 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [85b16e43c02a2bd454a964fc4961bfdaa9a268144e1cf47d965df162bb2b5856] <==
	I1205 07:05:21.615870       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:05:21.616158       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1205 07:05:21.616311       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:05:21.616343       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:05:21.616364       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:05:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:05:21.820667       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:05:21.820901       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:05:21.820937       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:05:21.912126       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:05:22.321039       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:05:22.321065       1 metrics.go:72] Registering metrics
	I1205 07:05:22.321138       1 controller.go:711] "Syncing nftables rules"
	I1205 07:05:31.820423       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:05:31.820520       1 main.go:301] handling current node
	I1205 07:05:41.823499       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:05:41.823566       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4314a9f2639d7aaad1cd2e17ba039b37f6a38db1ae83531cfd1655434aef955b] <==
	I1205 07:05:13.343545       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1205 07:05:13.343581       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1205 07:05:13.345975       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:05:13.346040       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1205 07:05:13.349913       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:05:13.350143       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1205 07:05:13.507714       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:05:14.232431       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1205 07:05:14.235909       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1205 07:05:14.235926       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:05:14.653404       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:05:14.687785       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:05:14.813572       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1205 07:05:14.818925       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1205 07:05:14.819881       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:05:14.823481       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:05:15.265156       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:05:15.893307       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:05:15.901074       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 07:05:15.908024       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1205 07:05:20.268642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:05:21.066897       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1205 07:05:21.167312       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:05:21.171302       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1205 07:05:42.046517       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:51092: use of closed network connection
	
	
	==> kube-controller-manager [c0e02d65279f803e6e55b207e4ea4ddaa60059616f473cca401791ba3553c923] <==
	I1205 07:05:20.264494       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1205 07:05:20.264527       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1205 07:05:20.265382       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1205 07:05:20.265408       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1205 07:05:20.265417       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1205 07:05:20.265452       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1205 07:05:20.265531       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1205 07:05:20.265754       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1205 07:05:20.265758       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1205 07:05:20.265885       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1205 07:05:20.267924       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1205 07:05:20.268001       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 07:05:20.269790       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1205 07:05:20.269826       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:05:20.269844       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1205 07:05:20.269857       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1205 07:05:20.269904       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 07:05:20.269911       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1205 07:05:20.269917       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1205 07:05:20.274425       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1205 07:05:20.276772       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-172186" podCIDRs=["10.244.0.0/24"]
	I1205 07:05:20.280953       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1205 07:05:20.283636       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:05:20.285789       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 07:05:35.217631       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5b00bf335f94c2fa6a9c3b0a436aa7214622ab0d35dd0b82dd7ce9b46089f456] <==
	I1205 07:05:21.484035       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:05:21.563126       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 07:05:21.664045       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 07:05:21.664082       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1205 07:05:21.664192       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:05:21.684315       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:05:21.684386       1 server_linux.go:132] "Using iptables Proxier"
	I1205 07:05:21.689659       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:05:21.690050       1 server.go:527] "Version info" version="v1.34.2"
	I1205 07:05:21.690085       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:21.691318       1 config.go:200] "Starting service config controller"
	I1205 07:05:21.691378       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:05:21.691440       1 config.go:309] "Starting node config controller"
	I1205 07:05:21.691455       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:05:21.691489       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:05:21.691499       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:05:21.691518       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:05:21.691527       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:05:21.791656       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:05:21.791791       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:05:21.791788       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 07:05:21.791825       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0cb8e7d7f5828ed49da00ef084d909803387d6aa19372cddbc95695fcf02872] <==
	E1205 07:05:13.281585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 07:05:13.281614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 07:05:13.281658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 07:05:13.281690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 07:05:13.281734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 07:05:13.281785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 07:05:13.281891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 07:05:13.282307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 07:05:13.282479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 07:05:14.096765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1205 07:05:14.117851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 07:05:14.149015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 07:05:14.222293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 07:05:14.273812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 07:05:14.285903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 07:05:14.288941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 07:05:14.290768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 07:05:14.296671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 07:05:14.340579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 07:05:14.373796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 07:05:14.373906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 07:05:14.411349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 07:05:14.424244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 07:05:14.449271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1205 07:05:16.274371       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 07:05:16 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:16.774367    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-172186" podStartSLOduration=1.77434838 podStartE2EDuration="1.77434838s" podCreationTimestamp="2025-12-05 07:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:16.76748148 +0000 UTC m=+1.127221907" watchObservedRunningTime="2025-12-05 07:05:16.77434838 +0000 UTC m=+1.134088808"
	Dec 05 07:05:16 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:16.784815    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-172186" podStartSLOduration=1.784794795 podStartE2EDuration="1.784794795s" podCreationTimestamp="2025-12-05 07:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:16.77428499 +0000 UTC m=+1.134025418" watchObservedRunningTime="2025-12-05 07:05:16.784794795 +0000 UTC m=+1.144535244"
	Dec 05 07:05:16 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:16.784934    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-172186" podStartSLOduration=1.784924922 podStartE2EDuration="1.784924922s" podCreationTimestamp="2025-12-05 07:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:16.784689901 +0000 UTC m=+1.144430331" watchObservedRunningTime="2025-12-05 07:05:16.784924922 +0000 UTC m=+1.144665351"
	Dec 05 07:05:20 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:20.348957    1344 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 05 07:05:20 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:20.349717    1344 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 05 07:05:21 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:21.140727    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c1a939e-c7e6-4202-bffa-374ace420fd7-kube-proxy\") pod \"kube-proxy-fpss6\" (UID: \"9c1a939e-c7e6-4202-bffa-374ace420fd7\") " pod="kube-system/kube-proxy-fpss6"
	Dec 05 07:05:21 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:21.140768    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjgdh\" (UniqueName: \"kubernetes.io/projected/9c1a939e-c7e6-4202-bffa-374ace420fd7-kube-api-access-rjgdh\") pod \"kube-proxy-fpss6\" (UID: \"9c1a939e-c7e6-4202-bffa-374ace420fd7\") " pod="kube-system/kube-proxy-fpss6"
	Dec 05 07:05:21 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:21.140795    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3de2accc-6a87-4b4c-920d-74d5b5058c8e-xtables-lock\") pod \"kindnet-w2mzg\" (UID: \"3de2accc-6a87-4b4c-920d-74d5b5058c8e\") " pod="kube-system/kindnet-w2mzg"
	Dec 05 07:05:21 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:21.140819    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3de2accc-6a87-4b4c-920d-74d5b5058c8e-lib-modules\") pod \"kindnet-w2mzg\" (UID: \"3de2accc-6a87-4b4c-920d-74d5b5058c8e\") " pod="kube-system/kindnet-w2mzg"
	Dec 05 07:05:21 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:21.140834    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c1a939e-c7e6-4202-bffa-374ace420fd7-xtables-lock\") pod \"kube-proxy-fpss6\" (UID: \"9c1a939e-c7e6-4202-bffa-374ace420fd7\") " pod="kube-system/kube-proxy-fpss6"
	Dec 05 07:05:21 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:21.140847    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c1a939e-c7e6-4202-bffa-374ace420fd7-lib-modules\") pod \"kube-proxy-fpss6\" (UID: \"9c1a939e-c7e6-4202-bffa-374ace420fd7\") " pod="kube-system/kube-proxy-fpss6"
	Dec 05 07:05:21 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:21.140866    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3de2accc-6a87-4b4c-920d-74d5b5058c8e-cni-cfg\") pod \"kindnet-w2mzg\" (UID: \"3de2accc-6a87-4b4c-920d-74d5b5058c8e\") " pod="kube-system/kindnet-w2mzg"
	Dec 05 07:05:21 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:21.140891    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654cx\" (UniqueName: \"kubernetes.io/projected/3de2accc-6a87-4b4c-920d-74d5b5058c8e-kube-api-access-654cx\") pod \"kindnet-w2mzg\" (UID: \"3de2accc-6a87-4b4c-920d-74d5b5058c8e\") " pod="kube-system/kindnet-w2mzg"
	Dec 05 07:05:21 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:21.762162    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fpss6" podStartSLOduration=0.762140734 podStartE2EDuration="762.140734ms" podCreationTimestamp="2025-12-05 07:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:21.761654349 +0000 UTC m=+6.121394792" watchObservedRunningTime="2025-12-05 07:05:21.762140734 +0000 UTC m=+6.121881162"
	Dec 05 07:05:22 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:22.052827    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-w2mzg" podStartSLOduration=1.052781463 podStartE2EDuration="1.052781463s" podCreationTimestamp="2025-12-05 07:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:21.770866716 +0000 UTC m=+6.130607146" watchObservedRunningTime="2025-12-05 07:05:22.052781463 +0000 UTC m=+6.412521883"
	Dec 05 07:05:32 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:32.093006    1344 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 05 07:05:32 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:32.221181    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee60b2ad-840a-442d-9475-85e27048c452-config-volume\") pod \"coredns-66bc5c9577-lzlm8\" (UID: \"ee60b2ad-840a-442d-9475-85e27048c452\") " pod="kube-system/coredns-66bc5c9577-lzlm8"
	Dec 05 07:05:32 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:32.221254    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wpk6\" (UniqueName: \"kubernetes.io/projected/cf31286d-bf29-4883-828c-4e9aee83201f-kube-api-access-2wpk6\") pod \"storage-provisioner\" (UID: \"cf31286d-bf29-4883-828c-4e9aee83201f\") " pod="kube-system/storage-provisioner"
	Dec 05 07:05:32 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:32.221317    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq744\" (UniqueName: \"kubernetes.io/projected/ee60b2ad-840a-442d-9475-85e27048c452-kube-api-access-wq744\") pod \"coredns-66bc5c9577-lzlm8\" (UID: \"ee60b2ad-840a-442d-9475-85e27048c452\") " pod="kube-system/coredns-66bc5c9577-lzlm8"
	Dec 05 07:05:32 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:32.221415    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf31286d-bf29-4883-828c-4e9aee83201f-tmp\") pod \"storage-provisioner\" (UID: \"cf31286d-bf29-4883-828c-4e9aee83201f\") " pod="kube-system/storage-provisioner"
	Dec 05 07:05:32 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:32.786267    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.786247511 podStartE2EDuration="10.786247511s" podCreationTimestamp="2025-12-05 07:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:32.786173059 +0000 UTC m=+17.145913488" watchObservedRunningTime="2025-12-05 07:05:32.786247511 +0000 UTC m=+17.145987939"
	Dec 05 07:05:34 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:34.937920    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lzlm8" podStartSLOduration=13.93789218 podStartE2EDuration="13.93789218s" podCreationTimestamp="2025-12-05 07:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:32.79480298 +0000 UTC m=+17.154543421" watchObservedRunningTime="2025-12-05 07:05:34.93789218 +0000 UTC m=+19.297632607"
	Dec 05 07:05:35 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:35.039527    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5pz8\" (UniqueName: \"kubernetes.io/projected/17b6c1ea-a6af-43b5-91c4-189bf0265bc6-kube-api-access-q5pz8\") pod \"busybox\" (UID: \"17b6c1ea-a6af-43b5-91c4-189bf0265bc6\") " pod="default/busybox"
	Dec 05 07:05:36 default-k8s-diff-port-172186 kubelet[1344]: I1205 07:05:36.800901    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.108223665 podStartE2EDuration="2.800881092s" podCreationTimestamp="2025-12-05 07:05:34 +0000 UTC" firstStartedPulling="2025-12-05 07:05:35.260299136 +0000 UTC m=+19.620039543" lastFinishedPulling="2025-12-05 07:05:35.952956549 +0000 UTC m=+20.312696970" observedRunningTime="2025-12-05 07:05:36.800753732 +0000 UTC m=+21.160494159" watchObservedRunningTime="2025-12-05 07:05:36.800881092 +0000 UTC m=+21.160621521"
	Dec 05 07:05:42 default-k8s-diff-port-172186 kubelet[1344]: E1205 07:05:42.046427    1344 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37314->127.0.0.1:43373: write tcp 127.0.0.1:37314->127.0.0.1:43373: write: broken pipe
	
	
	==> storage-provisioner [179686bb23fd37735151e87dddd02f31f461c5fe1c3308c8f6d24cfff552844c] <==
	I1205 07:05:32.485887       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:05:32.493385       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:05:32.493447       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1205 07:05:32.495815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:32.500966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:05:32.501143       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:05:32.501658       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-172186_57aeee32-21f0-4b07-816e-e9d71f343c6e!
	I1205 07:05:32.501708       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73acd703-8958-4c9b-a71e-6ab66433bd8b", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-172186_57aeee32-21f0-4b07-816e-e9d71f343c6e became leader
	W1205 07:05:32.505270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:32.509613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:05:32.602832       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-172186_57aeee32-21f0-4b07-816e-e9d71f343c6e!
	W1205 07:05:34.512639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:34.516303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:36.518948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:36.523283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:38.526444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:38.531390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:40.534240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:40.537845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:42.542223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:42.547519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-172186 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.46s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (286.595127ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
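The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight "check paused" step: before enabling an addon it tries to list paused containers, and on this crio profile that is reported as `sudo runc list -f json` failing because /run/runc does not exist on the node. Below is a minimal Go sketch of re-running that probe by hand; the binary path, profile name, and runc invocation are copied from the failing command above, while reaching the node via `minikube ssh` is an assumption of the sketch, not harness code.

// paused_check_repro.go: hypothetical manual re-run of the probe that failed
// above; a sketch under the stated assumptions, not minikube's own code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Profile name and binary path come from the failing command in this report.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-770390",
		"ssh", "--", "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// On this node the expected failure matches the stderr captured above:
		//   level=error msg="open /run/runc: no such file or directory"
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}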
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-770390 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-770390 describe deploy/metrics-server -n kube-system: exit status 1 (60.346585ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-770390 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
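start_stop_delete_test.go:219 checks that the metrics-server deployment's image contains the override registry passed via --registries=MetricsServer=fake.domain; because the enable call already failed, the deployment does not exist and the deployment info above is empty. A rough Go equivalent of that check follows (context name and expected image are taken from the flags above; kubectl on PATH is assumed, and this is a sketch rather than the harness code).

// addon_image_check.go: hand-rolled version of the image assertion, under the
// assumptions stated above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "fake.domain/registry.k8s.io/echoserver:1.4" // from --registries/--images
	out, err := exec.Command("kubectl", "--context", "embed-certs-770390",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").CombinedOutput()
	if err != nil {
		// Matches this run: the deployment was never created, so kubectl
		// returns NotFound (exit status 1).
		fmt.Printf("metrics-server deployment not found: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), want) {
		fmt.Printf("unexpected image(s) %q, want substring %q\n", out, want)
	}
}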
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-770390
helpers_test.go:243: (dbg) docker inspect embed-certs-770390:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15",
	        "Created": "2025-12-05T07:04:47.935376196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353265,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:04:48.184765053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/hostname",
	        "HostsPath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/hosts",
	        "LogPath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15-json.log",
	        "Name": "/embed-certs-770390",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-770390:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-770390",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15",
	                "LowerDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-770390",
	                "Source": "/var/lib/docker/volumes/embed-certs-770390/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-770390",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-770390",
	                "name.minikube.sigs.k8s.io": "embed-certs-770390",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cf44099a4f8ac04a1e0ed24b8ce82c9cb7fe06b04ec4c03ad209756c64539ec7",
	            "SandboxKey": "/var/run/docker/netns/cf44099a4f8a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-770390": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "931902d22986d998cad8286fbe16fdac2b5321eb6ca6ce1a3581e586ebb4b1ac",
	                    "EndpointID": "d2ce2a9cd1d6a060e7e3c06a61844e335b4a34d78bad71fecbc3557747b08ad5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "62:5e:e9:72:af:5d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-770390",
	                        "efaf2da28c0c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-770390 -n embed-certs-770390
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-770390 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-770390 logs -n 25: (1.409817042s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-397607 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │                     │
	│ ssh     │ -p bridge-397607 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo containerd config dump                                                                                                                                                                                                  │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo crio config                                                                                                                                                                                                             │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p bridge-397607                                                                                                                                                                                                                              │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p disable-driver-mounts-245906                                                                                                                                                                                                               │ disable-driver-mounts-245906 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p old-k8s-version-874709 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-874709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p no-preload-008839 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-172186 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p no-preload-008839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:06:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:06:01.180353  369138 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:06:01.180586  369138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:01.180595  369138 out.go:374] Setting ErrFile to fd 2...
	I1205 07:06:01.180598  369138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:01.180785  369138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:06:01.181188  369138 out.go:368] Setting JSON to false
	I1205 07:06:01.182372  369138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6505,"bootTime":1764911856,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:06:01.182422  369138 start.go:143] virtualization: kvm guest
	I1205 07:06:01.183964  369138 out.go:179] * [default-k8s-diff-port-172186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:06:01.185424  369138 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:06:01.185435  369138 notify.go:221] Checking for updates...
	I1205 07:06:01.187226  369138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:06:01.188220  369138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:01.189317  369138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:06:01.190301  369138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:06:01.191442  369138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:06:01.192978  369138 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:01.193475  369138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:06:01.217006  369138 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:06:01.217083  369138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:01.269057  369138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-05 07:06:01.259668248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:01.269161  369138 docker.go:319] overlay module found
	I1205 07:06:01.270726  369138 out.go:179] * Using the docker driver based on existing profile
	I1205 07:06:01.273527  369138 start.go:309] selected driver: docker
	I1205 07:06:01.273546  369138 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:01.273660  369138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:06:01.274285  369138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:01.328638  369138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-05 07:06:01.319808984 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:01.328902  369138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:01.328935  369138 cni.go:84] Creating CNI manager for ""
	I1205 07:06:01.328984  369138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:01.329017  369138 start.go:353] cluster config:
	{Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:01.330498  369138 out.go:179] * Starting "default-k8s-diff-port-172186" primary control-plane node in "default-k8s-diff-port-172186" cluster
	I1205 07:06:01.331537  369138 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:06:01.332633  369138 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:06:01.333495  369138 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:01.333520  369138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 07:06:01.333527  369138 cache.go:65] Caching tarball of preloaded images
	I1205 07:06:01.333590  369138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:06:01.333612  369138 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 07:06:01.333619  369138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:06:01.333694  369138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/config.json ...
	I1205 07:06:01.352461  369138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:06:01.352477  369138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:06:01.352490  369138 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:06:01.352512  369138 start.go:360] acquireMachinesLock for default-k8s-diff-port-172186: {Name:mkc7b70f4fd2c66eec9f181ab0dc691b16be91dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:01.352565  369138 start.go:364] duration metric: took 31.412µs to acquireMachinesLock for "default-k8s-diff-port-172186"
	I1205 07:06:01.352581  369138 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:06:01.352586  369138 fix.go:54] fixHost starting: 
	I1205 07:06:01.352769  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:01.368837  369138 fix.go:112] recreateIfNeeded on default-k8s-diff-port-172186: state=Stopped err=<nil>
	W1205 07:06:01.368859  369138 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:05:59.098239  361350 pod_ready.go:104] pod "coredns-5dd5756b68-srvvk" is not "Ready", error: <nil>
	W1205 07:06:01.098851  361350 pod_ready.go:104] pod "coredns-5dd5756b68-srvvk" is not "Ready", error: <nil>
	I1205 07:06:02.598698  361350 pod_ready.go:94] pod "coredns-5dd5756b68-srvvk" is "Ready"
	I1205 07:06:02.598728  361350 pod_ready.go:86] duration metric: took 33.506059911s for pod "coredns-5dd5756b68-srvvk" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.601667  361350 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.606548  361350 pod_ready.go:94] pod "etcd-old-k8s-version-874709" is "Ready"
	I1205 07:06:02.606569  361350 pod_ready.go:86] duration metric: took 4.878762ms for pod "etcd-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.609599  361350 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.614289  361350 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-874709" is "Ready"
	I1205 07:06:02.614308  361350 pod_ready.go:86] duration metric: took 4.692692ms for pod "kube-apiserver-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.617295  361350 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.795595  361350 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-874709" is "Ready"
	I1205 07:06:02.795632  361350 pod_ready.go:86] duration metric: took 178.308346ms for pod "kube-controller-manager-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.997254  361350 pod_ready.go:83] waiting for pod "kube-proxy-98jls" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:03.396528  361350 pod_ready.go:94] pod "kube-proxy-98jls" is "Ready"
	I1205 07:06:03.396554  361350 pod_ready.go:86] duration metric: took 399.27461ms for pod "kube-proxy-98jls" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:03.597674  361350 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:58.862201  366710 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:05:58.867008  366710 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1205 07:05:58.867995  366710 api_server.go:141] control plane version: v1.35.0-beta.0
	I1205 07:05:58.868017  366710 api_server.go:131] duration metric: took 1.006376467s to wait for apiserver health ...
	I1205 07:05:58.868026  366710 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:05:58.871519  366710 system_pods.go:59] 8 kube-system pods found
	I1205 07:05:58.871555  366710 system_pods.go:61] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:58.871566  366710 system_pods.go:61] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:05:58.871579  366710 system_pods.go:61] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:05:58.871585  366710 system_pods.go:61] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:05:58.871593  366710 system_pods.go:61] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:05:58.871598  366710 system_pods.go:61] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:05:58.871609  366710 system_pods.go:61] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:05:58.871616  366710 system_pods.go:61] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:58.871628  366710 system_pods.go:74] duration metric: took 3.595932ms to wait for pod list to return data ...
	I1205 07:05:58.871641  366710 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:05:58.873971  366710 default_sa.go:45] found service account: "default"
	I1205 07:05:58.873989  366710 default_sa.go:55] duration metric: took 2.342026ms for default service account to be created ...
	I1205 07:05:58.873999  366710 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:05:58.876526  366710 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:58.876552  366710 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:58.876564  366710 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:05:58.876572  366710 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:05:58.876578  366710 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:05:58.876584  366710 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:05:58.876592  366710 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:05:58.876597  366710 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:05:58.876605  366710 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:58.876611  366710 system_pods.go:126] duration metric: took 2.607202ms to wait for k8s-apps to be running ...
	I1205 07:05:58.876620  366710 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:05:58.876654  366710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:05:58.889311  366710 system_svc.go:56] duration metric: took 12.685986ms WaitForService to wait for kubelet
	I1205 07:05:58.889358  366710 kubeadm.go:587] duration metric: took 3.2316491s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:05:58.889379  366710 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:05:58.891693  366710 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:05:58.891712  366710 node_conditions.go:123] node cpu capacity is 8
	I1205 07:05:58.891725  366710 node_conditions.go:105] duration metric: took 2.341752ms to run NodePressure ...
	I1205 07:05:58.891735  366710 start.go:242] waiting for startup goroutines ...
	I1205 07:05:58.891745  366710 start.go:247] waiting for cluster config update ...
	I1205 07:05:58.891760  366710 start.go:256] writing updated cluster config ...
	I1205 07:05:58.891980  366710 ssh_runner.go:195] Run: rm -f paused
	I1205 07:05:58.895376  366710 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:05:58.898174  366710 pod_ready.go:83] waiting for pod "coredns-7d764666f9-bvbhf" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:06:00.903613  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:03.403874  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:03.996446  361350 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-874709" is "Ready"
	I1205 07:06:03.996477  361350 pod_ready.go:86] duration metric: took 398.777833ms for pod "kube-scheduler-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:03.996491  361350 pod_ready.go:40] duration metric: took 34.907225297s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:04.054517  361350 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1205 07:06:04.057064  361350 out.go:203] 
	W1205 07:06:04.058523  361350 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1205 07:06:04.059711  361350 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1205 07:06:04.060978  361350 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-874709" cluster and "default" namespace by default
	I1205 07:06:01.370314  369138 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-172186" ...
	I1205 07:06:01.370393  369138 cli_runner.go:164] Run: docker start default-k8s-diff-port-172186
	I1205 07:06:01.617870  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:01.636485  369138 kic.go:430] container "default-k8s-diff-port-172186" state is running.
	I1205 07:06:01.636802  369138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172186
	I1205 07:06:01.654671  369138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/config.json ...
	I1205 07:06:01.654872  369138 machine.go:94] provisionDockerMachine start ...
	I1205 07:06:01.654941  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:01.673701  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:01.673924  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:01.673936  369138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:06:01.674676  369138 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46964->127.0.0.1:33123: read: connection reset by peer
	I1205 07:06:04.821968  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-172186
	
	I1205 07:06:04.821994  369138 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-172186"
	I1205 07:06:04.822076  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:04.844977  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:04.845221  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:04.845236  369138 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-172186 && echo "default-k8s-diff-port-172186" | sudo tee /etc/hostname
	I1205 07:06:05.021790  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-172186
	
	I1205 07:06:05.021876  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:05.048047  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:05.048394  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:05.048426  369138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-172186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-172186/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-172186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:06:05.207090  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:06:05.207125  369138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:06:05.207167  369138 ubuntu.go:190] setting up certificates
	I1205 07:06:05.207177  369138 provision.go:84] configureAuth start
	I1205 07:06:05.207255  369138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172186
	I1205 07:06:05.232395  369138 provision.go:143] copyHostCerts
	I1205 07:06:05.232460  369138 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:06:05.232471  369138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:06:05.232555  369138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:06:05.232703  369138 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:06:05.232719  369138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:06:05.232765  369138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:06:05.232861  369138 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:06:05.232872  369138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:06:05.232911  369138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:06:05.232988  369138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-172186 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-172186 localhost minikube]
	I1205 07:06:05.364735  369138 provision.go:177] copyRemoteCerts
	I1205 07:06:05.364786  369138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:06:05.364817  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:05.388117  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:05.499381  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:06:05.522631  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 07:06:05.545521  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:06:05.568070  369138 provision.go:87] duration metric: took 360.875348ms to configureAuth
	I1205 07:06:05.568099  369138 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:06:05.568372  369138 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:05.568548  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:05.590384  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:05.590652  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:05.590675  369138 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Dec 05 07:05:54 embed-certs-770390 crio[776]: time="2025-12-05T07:05:54.584683897Z" level=info msg="Starting container: 728f4b4ce742467f560112c5d42e3a8fd735f37a282cba7d2672839023b8cb81" id=1225e3e3-5958-4045-81df-fdda7a0298b3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:05:54 embed-certs-770390 crio[776]: time="2025-12-05T07:05:54.586693318Z" level=info msg="Started container" PID=1890 containerID=728f4b4ce742467f560112c5d42e3a8fd735f37a282cba7d2672839023b8cb81 description=kube-system/coredns-66bc5c9577-rg55r/coredns id=1225e3e3-5958-4045-81df-fdda7a0298b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb16c1550d05469769283c16c3d2882c3d17b28f7d7bab017a777fea8950753e
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.766690344Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3b6a5c0f-473f-45ee-9d11-42f5acfb09b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.766773982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.771347324Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:14afbd8f69540f272b3de2d2b6328c41c1fac25d8a9e35dfeb5af95f6231f0f9 UID:a67b9028-baba-44af-9d25-db1f756f4ab3 NetNS:/var/run/netns/d3437961-8e80-4586-a476-c1800df6c829 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000156518}] Aliases:map[]}"
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.771385159Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.781559676Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:14afbd8f69540f272b3de2d2b6328c41c1fac25d8a9e35dfeb5af95f6231f0f9 UID:a67b9028-baba-44af-9d25-db1f756f4ab3 NetNS:/var/run/netns/d3437961-8e80-4586-a476-c1800df6c829 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000156518}] Aliases:map[]}"
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.781668074Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.782399102Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.783518541Z" level=info msg="Ran pod sandbox 14afbd8f69540f272b3de2d2b6328c41c1fac25d8a9e35dfeb5af95f6231f0f9 with infra container: default/busybox/POD" id=3b6a5c0f-473f-45ee-9d11-42f5acfb09b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.784720324Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0700572e-da59-4055-8c25-742bde3c68dd name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.784874758Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0700572e-da59-4055-8c25-742bde3c68dd name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.784924375Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0700572e-da59-4055-8c25-742bde3c68dd name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.785673302Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c5a8027a-4ef0-4fad-81e9-79ef260e3b91 name=/runtime.v1.ImageService/PullImage
	Dec 05 07:05:58 embed-certs-770390 crio[776]: time="2025-12-05T07:05:58.789482192Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 05 07:05:59 embed-certs-770390 crio[776]: time="2025-12-05T07:05:59.503684258Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c5a8027a-4ef0-4fad-81e9-79ef260e3b91 name=/runtime.v1.ImageService/PullImage
	Dec 05 07:05:59 embed-certs-770390 crio[776]: time="2025-12-05T07:05:59.504305507Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6d92f84b-fe0d-46b3-ac82-1cb8b36c7a20 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:59 embed-certs-770390 crio[776]: time="2025-12-05T07:05:59.505534219Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=82304004-4dae-4f80-9f70-ffb6f8aaa630 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:59 embed-certs-770390 crio[776]: time="2025-12-05T07:05:59.508437357Z" level=info msg="Creating container: default/busybox/busybox" id=e5f99707-60ab-4067-b385-92fe5e75d8e2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:05:59 embed-certs-770390 crio[776]: time="2025-12-05T07:05:59.508534345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:59 embed-certs-770390 crio[776]: time="2025-12-05T07:05:59.512119249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:59 embed-certs-770390 crio[776]: time="2025-12-05T07:05:59.512535796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:59 embed-certs-770390 crio[776]: time="2025-12-05T07:05:59.547901614Z" level=info msg="Created container a24c8dfdfe6c3834a37367a97de4b70f2a5b52c964c2060a5ee303a4cd14a106: default/busybox/busybox" id=e5f99707-60ab-4067-b385-92fe5e75d8e2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:05:59 embed-certs-770390 crio[776]: time="2025-12-05T07:05:59.548406278Z" level=info msg="Starting container: a24c8dfdfe6c3834a37367a97de4b70f2a5b52c964c2060a5ee303a4cd14a106" id=1bd14736-3b6f-4e12-b316-5c54f46f0c66 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:05:59 embed-certs-770390 crio[776]: time="2025-12-05T07:05:59.550274093Z" level=info msg="Started container" PID=1966 containerID=a24c8dfdfe6c3834a37367a97de4b70f2a5b52c964c2060a5ee303a4cd14a106 description=default/busybox/busybox id=1bd14736-3b6f-4e12-b316-5c54f46f0c66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14afbd8f69540f272b3de2d2b6328c41c1fac25d8a9e35dfeb5af95f6231f0f9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a24c8dfdfe6c3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago        Running             busybox                   0                   14afbd8f69540       busybox                                      default
	728f4b4ce7424       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago       Running             coredns                   0                   eb16c1550d054       coredns-66bc5c9577-rg55r                     kube-system
	0fc76cf8d2058       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago       Running             storage-provisioner       0                   4bc55e02df6e2       storage-provisioner                          kube-system
	b8d2d4996309d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      52 seconds ago       Running             kube-proxy                0                   c0d4a931fedf0       kube-proxy-7bjnn                             kube-system
	d7620d6d567cc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      53 seconds ago       Running             kindnet-cni               0                   507d231666729       kindnet-dmpt2                                kube-system
	2630d90760d9b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      About a minute ago   Running             kube-controller-manager   0                   76a5f6162e75b       kube-controller-manager-embed-certs-770390   kube-system
	131cf0c1e7e29       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      About a minute ago   Running             kube-scheduler            0                   67aedcd463868       kube-scheduler-embed-certs-770390            kube-system
	56d0559b60a0c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      About a minute ago   Running             etcd                      0                   d0da91650c102       etcd-embed-certs-770390                      kube-system
	06af11b235ca0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      About a minute ago   Running             kube-apiserver            0                   67a451e2c1c80       kube-apiserver-embed-certs-770390            kube-system
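	The container status table above follows the column layout of crictl; assuming the embed-certs-770390 node is still up, the same listing can be pulled straight from the node as a cross-check:
	
		# list all CRI-O managed containers (running and exited) on the node
		minikube ssh -p embed-certs-770390 -- sudo crictl ps -a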
	
	
	==> coredns [728f4b4ce742467f560112c5d42e3a8fd735f37a282cba7d2672839023b8cb81] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35156 - 10303 "HINFO IN 4079910521001192241.2103010645049016771. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.465209674s
	
	
	==> describe nodes <==
	Name:               embed-certs-770390
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-770390
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=embed-certs-770390
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_05_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:05:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-770390
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:05:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:05:54 +0000   Fri, 05 Dec 2025 07:05:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:05:54 +0000   Fri, 05 Dec 2025 07:05:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:05:54 +0000   Fri, 05 Dec 2025 07:05:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:05:54 +0000   Fri, 05 Dec 2025 07:05:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-770390
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                6db5accb-9611-4107-b9f0-962216d17807
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-rg55r                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     54s
	  kube-system                 etcd-embed-certs-770390                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         61s
	  kube-system                 kindnet-dmpt2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-embed-certs-770390             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-770390    200m (2%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-7bjnn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-embed-certs-770390             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 52s   kube-proxy       
	  Normal  Starting                 60s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s   kubelet          Node embed-certs-770390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s   kubelet          Node embed-certs-770390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s   kubelet          Node embed-certs-770390 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s   node-controller  Node embed-certs-770390 event: Registered Node embed-certs-770390 in Controller
	  Normal  NodeReady                13s   kubelet          Node embed-certs-770390 status is now: NodeReady
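	To re-verify the NodeReady transition recorded in the events above, one option (assuming kubectl still has the embed-certs-770390 context that minikube configured) is to query the Ready condition directly:
	
		# print only the Ready condition status for the node
		kubectl --context embed-certs-770390 get node embed-certs-770390 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'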
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [56d0559b60a0c13c5d40cadee8dd70942c055f310cdb9f560ef38f7cc48f4be3] <==
	{"level":"warn","ts":"2025-12-05T07:05:04.403895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.414399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.421754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.428466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.435649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.443583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.450901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.456996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.463343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.474425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.480751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.487471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.493712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.500671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.507332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.514299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.522058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.528350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.534713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.542622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.549112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.569763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.575971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.583507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:04.638204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42092","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:06:07 up  1:48,  0 user,  load average: 3.75, 3.29, 2.20
	Linux embed-certs-770390 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d7620d6d567cc8dfaffb8c08f21ef6010b9de213d29868ebe136ec84172aa87d] <==
	I1205 07:05:13.811508       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:05:13.811744       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1205 07:05:13.811904       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:05:13.811932       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:05:13.811960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:05:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:05:14.012617       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:05:14.012672       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:05:14.012696       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:05:14.013205       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1205 07:05:44.013729       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1205 07:05:44.013732       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1205 07:05:44.013778       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1205 07:05:44.035509       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1205 07:05:45.417585       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:05:45.417685       1 metrics.go:72] Registering metrics
	I1205 07:05:45.418074       1 controller.go:711] "Syncing nftables rules"
	I1205 07:05:54.019043       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 07:05:54.019099       1 main.go:301] handling current node
	I1205 07:06:04.016221       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 07:06:04.016253       1 main.go:301] handling current node
	
	
	==> kube-apiserver [06af11b235ca0cdc3736cc91427bd5301fa4bc505b45ca6a2131534196f9df28] <==
	I1205 07:05:05.142396       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 07:05:05.143074       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1205 07:05:05.143103       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1205 07:05:05.151377       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1205 07:05:05.156589       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1205 07:05:05.180192       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:05:05.346036       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:05:06.045140       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1205 07:05:06.049124       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1205 07:05:06.049141       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:05:06.543449       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:05:06.581621       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:05:06.655384       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1205 07:05:06.664158       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1205 07:05:06.665634       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:05:06.670796       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:05:07.065356       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:05:07.590464       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:05:07.599447       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 07:05:07.608100       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1205 07:05:12.167023       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:05:12.718366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:05:12.722369       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:05:13.116405       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1205 07:06:05.573710       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:42864: use of closed network connection
	
	
	==> kube-controller-manager [2630d90760d9b6fff133ddf767a220755fbf41c4fb5b6a85282e9f7eab628707] <==
	I1205 07:05:12.033578       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1205 07:05:12.041844       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1205 07:05:12.049126       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1205 07:05:12.055390       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 07:05:12.061771       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 07:05:12.062861       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1205 07:05:12.062895       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1205 07:05:12.062935       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1205 07:05:12.062964       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1205 07:05:12.064206       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1205 07:05:12.064241       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1205 07:05:12.064257       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1205 07:05:12.064656       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1205 07:05:12.064690       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1205 07:05:12.064726       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 07:05:12.064947       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1205 07:05:12.064993       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1205 07:05:12.064930       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1205 07:05:12.067238       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1205 07:05:12.068522       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:05:12.069770       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1205 07:05:12.073511       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:05:12.076631       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1205 07:05:12.084823       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:05:57.173827       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b8d2d4996309db776c5dd40eacdd5a4512cead8fd77f7d5f21ee1c2378942c68] <==
	I1205 07:05:15.042967       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:05:15.102508       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 07:05:15.202627       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 07:05:15.202659       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1205 07:05:15.202798       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:05:15.221241       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:05:15.221299       1 server_linux.go:132] "Using iptables Proxier"
	I1205 07:05:15.226691       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:05:15.227073       1 server.go:527] "Version info" version="v1.34.2"
	I1205 07:05:15.227115       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:15.228450       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:05:15.228470       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:05:15.228488       1 config.go:200] "Starting service config controller"
	I1205 07:05:15.228502       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:05:15.228515       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:05:15.228526       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:05:15.228640       1 config.go:309] "Starting node config controller"
	I1205 07:05:15.228688       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:05:15.228698       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:05:15.329554       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:05:15.329575       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 07:05:15.329576       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [131cf0c1e7e29d8b99ee7c5ab9c8f8a1481e242e10aa04e3131ea0018c2d06a8] <==
	E1205 07:05:05.104110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 07:05:05.104144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 07:05:05.104167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 07:05:05.104207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 07:05:05.104216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 07:05:05.104207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 07:05:05.104248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 07:05:05.104279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 07:05:05.104307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 07:05:05.104365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 07:05:05.104530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 07:05:05.964934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 07:05:06.002226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 07:05:06.007761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 07:05:06.036425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 07:05:06.036507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1205 07:05:06.043087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 07:05:06.054586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 07:05:06.077835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 07:05:06.088175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 07:05:06.096661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 07:05:06.178225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 07:05:06.191659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 07:05:06.349583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1205 07:05:09.199641       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 07:05:08 embed-certs-770390 kubelet[1340]: I1205 07:05:08.526165    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-770390" podStartSLOduration=2.526145916 podStartE2EDuration="2.526145916s" podCreationTimestamp="2025-12-05 07:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:08.517589748 +0000 UTC m=+1.170114352" watchObservedRunningTime="2025-12-05 07:05:08.526145916 +0000 UTC m=+1.178670522"
	Dec 05 07:05:12 embed-certs-770390 kubelet[1340]: I1205 07:05:12.081500    1340 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 05 07:05:12 embed-certs-770390 kubelet[1340]: I1205 07:05:12.082304    1340 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 05 07:05:13 embed-certs-770390 kubelet[1340]: E1205 07:05:13.163427    1340 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-770390\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-770390' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 05 07:05:13 embed-certs-770390 kubelet[1340]: I1205 07:05:13.172167    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44-kube-proxy\") pod \"kube-proxy-7bjnn\" (UID: \"6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44\") " pod="kube-system/kube-proxy-7bjnn"
	Dec 05 07:05:13 embed-certs-770390 kubelet[1340]: I1205 07:05:13.172217    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44-xtables-lock\") pod \"kube-proxy-7bjnn\" (UID: \"6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44\") " pod="kube-system/kube-proxy-7bjnn"
	Dec 05 07:05:13 embed-certs-770390 kubelet[1340]: I1205 07:05:13.172241    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/66c4a813-7f26-44e7-ab6f-be6422d710e6-cni-cfg\") pod \"kindnet-dmpt2\" (UID: \"66c4a813-7f26-44e7-ab6f-be6422d710e6\") " pod="kube-system/kindnet-dmpt2"
	Dec 05 07:05:13 embed-certs-770390 kubelet[1340]: I1205 07:05:13.172281    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66c4a813-7f26-44e7-ab6f-be6422d710e6-xtables-lock\") pod \"kindnet-dmpt2\" (UID: \"66c4a813-7f26-44e7-ab6f-be6422d710e6\") " pod="kube-system/kindnet-dmpt2"
	Dec 05 07:05:13 embed-certs-770390 kubelet[1340]: I1205 07:05:13.172302    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wl2\" (UniqueName: \"kubernetes.io/projected/66c4a813-7f26-44e7-ab6f-be6422d710e6-kube-api-access-v6wl2\") pod \"kindnet-dmpt2\" (UID: \"66c4a813-7f26-44e7-ab6f-be6422d710e6\") " pod="kube-system/kindnet-dmpt2"
	Dec 05 07:05:13 embed-certs-770390 kubelet[1340]: I1205 07:05:13.172342    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44-lib-modules\") pod \"kube-proxy-7bjnn\" (UID: \"6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44\") " pod="kube-system/kube-proxy-7bjnn"
	Dec 05 07:05:13 embed-certs-770390 kubelet[1340]: I1205 07:05:13.172365    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2b4x\" (UniqueName: \"kubernetes.io/projected/6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44-kube-api-access-q2b4x\") pod \"kube-proxy-7bjnn\" (UID: \"6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44\") " pod="kube-system/kube-proxy-7bjnn"
	Dec 05 07:05:13 embed-certs-770390 kubelet[1340]: I1205 07:05:13.172409    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66c4a813-7f26-44e7-ab6f-be6422d710e6-lib-modules\") pod \"kindnet-dmpt2\" (UID: \"66c4a813-7f26-44e7-ab6f-be6422d710e6\") " pod="kube-system/kindnet-dmpt2"
	Dec 05 07:05:14 embed-certs-770390 kubelet[1340]: E1205 07:05:14.273493    1340 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 05 07:05:14 embed-certs-770390 kubelet[1340]: E1205 07:05:14.273623    1340 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44-kube-proxy podName:6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44 nodeName:}" failed. No retries permitted until 2025-12-05 07:05:14.773587148 +0000 UTC m=+7.426111730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44-kube-proxy") pod "kube-proxy-7bjnn" (UID: "6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44") : failed to sync configmap cache: timed out waiting for the condition
	Dec 05 07:05:14 embed-certs-770390 kubelet[1340]: I1205 07:05:14.518718    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dmpt2" podStartSLOduration=1.518695615 podStartE2EDuration="1.518695615s" podCreationTimestamp="2025-12-05 07:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:14.505076505 +0000 UTC m=+7.157601118" watchObservedRunningTime="2025-12-05 07:05:14.518695615 +0000 UTC m=+7.171220243"
	Dec 05 07:05:15 embed-certs-770390 kubelet[1340]: I1205 07:05:15.496195    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7bjnn" podStartSLOduration=2.4961717930000002 podStartE2EDuration="2.496171793s" podCreationTimestamp="2025-12-05 07:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:15.495773518 +0000 UTC m=+8.148298123" watchObservedRunningTime="2025-12-05 07:05:15.496171793 +0000 UTC m=+8.148696398"
	Dec 05 07:05:54 embed-certs-770390 kubelet[1340]: I1205 07:05:54.210583    1340 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 05 07:05:54 embed-certs-770390 kubelet[1340]: I1205 07:05:54.260310    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pms84\" (UniqueName: \"kubernetes.io/projected/5c5ef936-ac84-44f0-8299-e431bcbbf8d9-kube-api-access-pms84\") pod \"storage-provisioner\" (UID: \"5c5ef936-ac84-44f0-8299-e431bcbbf8d9\") " pod="kube-system/storage-provisioner"
	Dec 05 07:05:54 embed-certs-770390 kubelet[1340]: I1205 07:05:54.260385    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c5ef936-ac84-44f0-8299-e431bcbbf8d9-tmp\") pod \"storage-provisioner\" (UID: \"5c5ef936-ac84-44f0-8299-e431bcbbf8d9\") " pod="kube-system/storage-provisioner"
	Dec 05 07:05:54 embed-certs-770390 kubelet[1340]: I1205 07:05:54.364283    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68bcad40-cb20-4ded-b15a-268ddb469470-config-volume\") pod \"coredns-66bc5c9577-rg55r\" (UID: \"68bcad40-cb20-4ded-b15a-268ddb469470\") " pod="kube-system/coredns-66bc5c9577-rg55r"
	Dec 05 07:05:54 embed-certs-770390 kubelet[1340]: I1205 07:05:54.364368    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clrxn\" (UniqueName: \"kubernetes.io/projected/68bcad40-cb20-4ded-b15a-268ddb469470-kube-api-access-clrxn\") pod \"coredns-66bc5c9577-rg55r\" (UID: \"68bcad40-cb20-4ded-b15a-268ddb469470\") " pod="kube-system/coredns-66bc5c9577-rg55r"
	Dec 05 07:05:55 embed-certs-770390 kubelet[1340]: I1205 07:05:55.600838    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.600390346 podStartE2EDuration="42.600390346s" podCreationTimestamp="2025-12-05 07:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:55.599935613 +0000 UTC m=+48.252460212" watchObservedRunningTime="2025-12-05 07:05:55.600390346 +0000 UTC m=+48.252914949"
	Dec 05 07:05:55 embed-certs-770390 kubelet[1340]: I1205 07:05:55.601069    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rg55r" podStartSLOduration=42.601058062999996 podStartE2EDuration="42.601058063s" podCreationTimestamp="2025-12-05 07:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:55.587735276 +0000 UTC m=+48.240259906" watchObservedRunningTime="2025-12-05 07:05:55.601058063 +0000 UTC m=+48.253582666"
	Dec 05 07:05:58 embed-certs-770390 kubelet[1340]: I1205 07:05:58.589559    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2488w\" (UniqueName: \"kubernetes.io/projected/a67b9028-baba-44af-9d25-db1f756f4ab3-kube-api-access-2488w\") pod \"busybox\" (UID: \"a67b9028-baba-44af-9d25-db1f756f4ab3\") " pod="default/busybox"
	Dec 05 07:05:59 embed-certs-770390 kubelet[1340]: I1205 07:05:59.589283    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.869392838 podStartE2EDuration="1.58925939s" podCreationTimestamp="2025-12-05 07:05:58 +0000 UTC" firstStartedPulling="2025-12-05 07:05:58.785205258 +0000 UTC m=+51.437729840" lastFinishedPulling="2025-12-05 07:05:59.505071804 +0000 UTC m=+52.157596392" observedRunningTime="2025-12-05 07:05:59.588990253 +0000 UTC m=+52.241514856" watchObservedRunningTime="2025-12-05 07:05:59.58925939 +0000 UTC m=+52.241783993"
	
	
	==> storage-provisioner [0fc76cf8d2058d5690b4c13c655633b4ecf8389746f3222e111c9b21de39e97b] <==
	I1205 07:05:54.596423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:05:54.604000       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:05:54.604050       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1205 07:05:54.606379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:54.612644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:05:54.612934       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:05:54.613182       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-770390_92dfeb30-413b-4541-b12c-37f3232d8de0!
	I1205 07:05:54.613284       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2811ca68-8b79-41ee-908b-89fe569de67c", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-770390_92dfeb30-413b-4541-b12c-37f3232d8de0 became leader
	W1205 07:05:54.617376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:54.621728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:05:54.713594       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-770390_92dfeb30-413b-4541-b12c-37f3232d8de0!
	W1205 07:05:56.624876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:56.628823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:58.632349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:05:58.637074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:00.640579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:00.644008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:02.647955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:02.653645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:04.657127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:04.661034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:06.670901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:06.720429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-770390 -n embed-certs-770390
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-770390 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-874709 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-874709 --alsologtostderr -v=1: exit status 80 (2.470753405s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-874709 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:06:15.871638  372737 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:06:15.871752  372737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:15.871763  372737 out.go:374] Setting ErrFile to fd 2...
	I1205 07:06:15.871770  372737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:15.872007  372737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:06:15.872353  372737 out.go:368] Setting JSON to false
	I1205 07:06:15.872371  372737 mustload.go:66] Loading cluster: old-k8s-version-874709
	I1205 07:06:15.872855  372737 config.go:182] Loaded profile config "old-k8s-version-874709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1205 07:06:15.873428  372737 cli_runner.go:164] Run: docker container inspect old-k8s-version-874709 --format={{.State.Status}}
	I1205 07:06:15.893448  372737 host.go:66] Checking if "old-k8s-version-874709" exists ...
	I1205 07:06:15.893731  372737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:15.949472  372737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-05 07:06:15.93889453 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:15.950522  372737 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-874709 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1205 07:06:15.953244  372737 out.go:179] * Pausing node old-k8s-version-874709 ... 
	I1205 07:06:15.954346  372737 host.go:66] Checking if "old-k8s-version-874709" exists ...
	I1205 07:06:15.954555  372737 ssh_runner.go:195] Run: systemctl --version
	I1205 07:06:15.954591  372737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-874709
	I1205 07:06:15.972299  372737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/old-k8s-version-874709/id_rsa Username:docker}
	I1205 07:06:16.069538  372737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:16.081880  372737 pause.go:52] kubelet running: true
	I1205 07:06:16.081949  372737 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:06:16.263060  372737 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:06:16.263170  372737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:06:16.330545  372737 cri.go:89] found id: "9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244"
	I1205 07:06:16.330569  372737 cri.go:89] found id: "52173fac10a5e3ea6e7f6a16a2d0beb412c01dcc4c73551b2d1d4d3d9a969797"
	I1205 07:06:16.330573  372737 cri.go:89] found id: "87a1771d8b8eb6617f0f7a7a79ed8a6ab8883676c7c108c7af5678dd3c70b62c"
	I1205 07:06:16.330577  372737 cri.go:89] found id: "189089a1551ba3627eb3128161e1bb599ef06f715efd379e386fde9d94c02bf3"
	I1205 07:06:16.330580  372737 cri.go:89] found id: "d6ff518de54f6fad8b6cef69f6ec5441de106d8cf80d95cb9fd83fa183cec7a0"
	I1205 07:06:16.330585  372737 cri.go:89] found id: "a5a9622dfd7dc6fdcabf3ea8aec3eaeabfdda77bc311ed906f332cc7d039353d"
	I1205 07:06:16.330588  372737 cri.go:89] found id: "6be13235867d468a9e246f51290d3c4f7ea7f6f8510393f2a1b3dab9fbb99a9b"
	I1205 07:06:16.330591  372737 cri.go:89] found id: "7c7e915cc7becaf51abc1256271d87f755bc16e224a0daf6a90d291932385f08"
	I1205 07:06:16.330596  372737 cri.go:89] found id: "ffe21b4df5d3a969685218725304cbe5f9fc2b6432a5f7451e96a4edabf288fc"
	I1205 07:06:16.330605  372737 cri.go:89] found id: "f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee"
	I1205 07:06:16.330609  372737 cri.go:89] found id: "64c85e718ac4a27fce72eae2812718ae0cc740e18fd72edafe1c18d3566e3a9a"
	I1205 07:06:16.330613  372737 cri.go:89] found id: ""
	I1205 07:06:16.330661  372737 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:06:16.342404  372737 retry.go:31] will retry after 214.463726ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:16Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:06:16.557862  372737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:16.572651  372737 pause.go:52] kubelet running: false
	I1205 07:06:16.572712  372737 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:06:16.778446  372737 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:06:16.778563  372737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:06:16.862361  372737 cri.go:89] found id: "9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244"
	I1205 07:06:16.862386  372737 cri.go:89] found id: "52173fac10a5e3ea6e7f6a16a2d0beb412c01dcc4c73551b2d1d4d3d9a969797"
	I1205 07:06:16.862392  372737 cri.go:89] found id: "87a1771d8b8eb6617f0f7a7a79ed8a6ab8883676c7c108c7af5678dd3c70b62c"
	I1205 07:06:16.862397  372737 cri.go:89] found id: "189089a1551ba3627eb3128161e1bb599ef06f715efd379e386fde9d94c02bf3"
	I1205 07:06:16.862401  372737 cri.go:89] found id: "d6ff518de54f6fad8b6cef69f6ec5441de106d8cf80d95cb9fd83fa183cec7a0"
	I1205 07:06:16.862407  372737 cri.go:89] found id: "a5a9622dfd7dc6fdcabf3ea8aec3eaeabfdda77bc311ed906f332cc7d039353d"
	I1205 07:06:16.862411  372737 cri.go:89] found id: "6be13235867d468a9e246f51290d3c4f7ea7f6f8510393f2a1b3dab9fbb99a9b"
	I1205 07:06:16.862416  372737 cri.go:89] found id: "7c7e915cc7becaf51abc1256271d87f755bc16e224a0daf6a90d291932385f08"
	I1205 07:06:16.862420  372737 cri.go:89] found id: "ffe21b4df5d3a969685218725304cbe5f9fc2b6432a5f7451e96a4edabf288fc"
	I1205 07:06:16.862429  372737 cri.go:89] found id: "f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee"
	I1205 07:06:16.862437  372737 cri.go:89] found id: "64c85e718ac4a27fce72eae2812718ae0cc740e18fd72edafe1c18d3566e3a9a"
	I1205 07:06:16.862442  372737 cri.go:89] found id: ""
	I1205 07:06:16.862489  372737 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:06:16.876819  372737 retry.go:31] will retry after 401.320923ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:16Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:06:17.278391  372737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:17.294354  372737 pause.go:52] kubelet running: false
	I1205 07:06:17.294417  372737 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:06:17.483721  372737 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:06:17.483808  372737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:06:17.566632  372737 cri.go:89] found id: "9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244"
	I1205 07:06:17.566660  372737 cri.go:89] found id: "52173fac10a5e3ea6e7f6a16a2d0beb412c01dcc4c73551b2d1d4d3d9a969797"
	I1205 07:06:17.566667  372737 cri.go:89] found id: "87a1771d8b8eb6617f0f7a7a79ed8a6ab8883676c7c108c7af5678dd3c70b62c"
	I1205 07:06:17.566672  372737 cri.go:89] found id: "189089a1551ba3627eb3128161e1bb599ef06f715efd379e386fde9d94c02bf3"
	I1205 07:06:17.566676  372737 cri.go:89] found id: "d6ff518de54f6fad8b6cef69f6ec5441de106d8cf80d95cb9fd83fa183cec7a0"
	I1205 07:06:17.566682  372737 cri.go:89] found id: "a5a9622dfd7dc6fdcabf3ea8aec3eaeabfdda77bc311ed906f332cc7d039353d"
	I1205 07:06:17.566686  372737 cri.go:89] found id: "6be13235867d468a9e246f51290d3c4f7ea7f6f8510393f2a1b3dab9fbb99a9b"
	I1205 07:06:17.566691  372737 cri.go:89] found id: "7c7e915cc7becaf51abc1256271d87f755bc16e224a0daf6a90d291932385f08"
	I1205 07:06:17.566695  372737 cri.go:89] found id: "ffe21b4df5d3a969685218725304cbe5f9fc2b6432a5f7451e96a4edabf288fc"
	I1205 07:06:17.566706  372737 cri.go:89] found id: "f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee"
	I1205 07:06:17.566714  372737 cri.go:89] found id: "64c85e718ac4a27fce72eae2812718ae0cc740e18fd72edafe1c18d3566e3a9a"
	I1205 07:06:17.566719  372737 cri.go:89] found id: ""
	I1205 07:06:17.566766  372737 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:06:17.582025  372737 retry.go:31] will retry after 363.000748ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:17Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:06:17.945433  372737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:17.961489  372737 pause.go:52] kubelet running: false
	I1205 07:06:17.961554  372737 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:06:18.160992  372737 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:06:18.161077  372737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:06:18.242642  372737 cri.go:89] found id: "9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244"
	I1205 07:06:18.242667  372737 cri.go:89] found id: "52173fac10a5e3ea6e7f6a16a2d0beb412c01dcc4c73551b2d1d4d3d9a969797"
	I1205 07:06:18.242674  372737 cri.go:89] found id: "87a1771d8b8eb6617f0f7a7a79ed8a6ab8883676c7c108c7af5678dd3c70b62c"
	I1205 07:06:18.242679  372737 cri.go:89] found id: "189089a1551ba3627eb3128161e1bb599ef06f715efd379e386fde9d94c02bf3"
	I1205 07:06:18.242684  372737 cri.go:89] found id: "d6ff518de54f6fad8b6cef69f6ec5441de106d8cf80d95cb9fd83fa183cec7a0"
	I1205 07:06:18.242689  372737 cri.go:89] found id: "a5a9622dfd7dc6fdcabf3ea8aec3eaeabfdda77bc311ed906f332cc7d039353d"
	I1205 07:06:18.242693  372737 cri.go:89] found id: "6be13235867d468a9e246f51290d3c4f7ea7f6f8510393f2a1b3dab9fbb99a9b"
	I1205 07:06:18.242697  372737 cri.go:89] found id: "7c7e915cc7becaf51abc1256271d87f755bc16e224a0daf6a90d291932385f08"
	I1205 07:06:18.242701  372737 cri.go:89] found id: "ffe21b4df5d3a969685218725304cbe5f9fc2b6432a5f7451e96a4edabf288fc"
	I1205 07:06:18.242709  372737 cri.go:89] found id: "f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee"
	I1205 07:06:18.242713  372737 cri.go:89] found id: "64c85e718ac4a27fce72eae2812718ae0cc740e18fd72edafe1c18d3566e3a9a"
	I1205 07:06:18.242729  372737 cri.go:89] found id: ""
	I1205 07:06:18.242783  372737 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:06:18.260584  372737 out.go:203] 
	W1205 07:06:18.261879  372737 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 07:06:18.261904  372737 out.go:285] * 
	* 
	W1205 07:06:18.268160  372737 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:06:18.269542  372737 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-874709 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-874709
helpers_test.go:243: (dbg) docker inspect old-k8s-version-874709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5",
	        "Created": "2025-12-05T07:04:05.274488478Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 361586,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:05:18.972892784Z",
	            "FinishedAt": "2025-12-05T07:05:18.104927096Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/hosts",
	        "LogPath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5-json.log",
	        "Name": "/old-k8s-version-874709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-874709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-874709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5",
	                "LowerDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-874709",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-874709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-874709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-874709",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-874709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "097b82bec7921f41e31893d8b8dfd25ae0a1a92896c8c9df10dd7263fca31a02",
	            "SandboxKey": "/var/run/docker/netns/097b82bec792",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-874709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b675820a4e14e6d815ef976a01c5649e140b5ac4be761da7497f0b550155e220",
	                    "EndpointID": "2b6edf6e3703b0d62935cffd8b181237c7ef2403fde734e9139cca1f323d5d9e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "d6:98:bf:32:88:bb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-874709",
	                        "e58ec92f2b17"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874709 -n old-k8s-version-874709
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874709 -n old-k8s-version-874709: exit status 2 (387.287484ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-874709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-874709 logs -n 25: (1.468559334s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-397607 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo containerd config dump                                                                                                                                                                                                  │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo crio config                                                                                                                                                                                                             │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p bridge-397607                                                                                                                                                                                                                              │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p disable-driver-mounts-245906                                                                                                                                                                                                               │ disable-driver-mounts-245906 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p old-k8s-version-874709 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-874709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p no-preload-008839 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-172186 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p no-preload-008839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                               │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:06:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:06:01.180353  369138 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:06:01.180586  369138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:01.180595  369138 out.go:374] Setting ErrFile to fd 2...
	I1205 07:06:01.180598  369138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:01.180785  369138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:06:01.181188  369138 out.go:368] Setting JSON to false
	I1205 07:06:01.182372  369138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6505,"bootTime":1764911856,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:06:01.182422  369138 start.go:143] virtualization: kvm guest
	I1205 07:06:01.183964  369138 out.go:179] * [default-k8s-diff-port-172186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:06:01.185424  369138 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:06:01.185435  369138 notify.go:221] Checking for updates...
	I1205 07:06:01.187226  369138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:06:01.188220  369138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:01.189317  369138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:06:01.190301  369138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:06:01.191442  369138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:06:01.192978  369138 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:01.193475  369138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:06:01.217006  369138 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:06:01.217083  369138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:01.269057  369138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-05 07:06:01.259668248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:01.269161  369138 docker.go:319] overlay module found
	I1205 07:06:01.270726  369138 out.go:179] * Using the docker driver based on existing profile
	I1205 07:06:01.273527  369138 start.go:309] selected driver: docker
	I1205 07:06:01.273546  369138 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:01.273660  369138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:06:01.274285  369138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:01.328638  369138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-05 07:06:01.319808984 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:01.328902  369138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:01.328935  369138 cni.go:84] Creating CNI manager for ""
	I1205 07:06:01.328984  369138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:01.329017  369138 start.go:353] cluster config:
	{Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:01.330498  369138 out.go:179] * Starting "default-k8s-diff-port-172186" primary control-plane node in "default-k8s-diff-port-172186" cluster
	I1205 07:06:01.331537  369138 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:06:01.332633  369138 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:06:01.333495  369138 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:01.333520  369138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 07:06:01.333527  369138 cache.go:65] Caching tarball of preloaded images
	I1205 07:06:01.333590  369138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:06:01.333612  369138 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 07:06:01.333619  369138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:06:01.333694  369138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/config.json ...
	I1205 07:06:01.352461  369138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:06:01.352477  369138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:06:01.352490  369138 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:06:01.352512  369138 start.go:360] acquireMachinesLock for default-k8s-diff-port-172186: {Name:mkc7b70f4fd2c66eec9f181ab0dc691b16be91dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:01.352565  369138 start.go:364] duration metric: took 31.412µs to acquireMachinesLock for "default-k8s-diff-port-172186"
	I1205 07:06:01.352581  369138 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:06:01.352586  369138 fix.go:54] fixHost starting: 
	I1205 07:06:01.352769  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:01.368837  369138 fix.go:112] recreateIfNeeded on default-k8s-diff-port-172186: state=Stopped err=<nil>
	W1205 07:06:01.368859  369138 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:05:59.098239  361350 pod_ready.go:104] pod "coredns-5dd5756b68-srvvk" is not "Ready", error: <nil>
	W1205 07:06:01.098851  361350 pod_ready.go:104] pod "coredns-5dd5756b68-srvvk" is not "Ready", error: <nil>
	I1205 07:06:02.598698  361350 pod_ready.go:94] pod "coredns-5dd5756b68-srvvk" is "Ready"
	I1205 07:06:02.598728  361350 pod_ready.go:86] duration metric: took 33.506059911s for pod "coredns-5dd5756b68-srvvk" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.601667  361350 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.606548  361350 pod_ready.go:94] pod "etcd-old-k8s-version-874709" is "Ready"
	I1205 07:06:02.606569  361350 pod_ready.go:86] duration metric: took 4.878762ms for pod "etcd-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.609599  361350 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.614289  361350 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-874709" is "Ready"
	I1205 07:06:02.614308  361350 pod_ready.go:86] duration metric: took 4.692692ms for pod "kube-apiserver-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.617295  361350 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.795595  361350 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-874709" is "Ready"
	I1205 07:06:02.795632  361350 pod_ready.go:86] duration metric: took 178.308346ms for pod "kube-controller-manager-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.997254  361350 pod_ready.go:83] waiting for pod "kube-proxy-98jls" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:03.396528  361350 pod_ready.go:94] pod "kube-proxy-98jls" is "Ready"
	I1205 07:06:03.396554  361350 pod_ready.go:86] duration metric: took 399.27461ms for pod "kube-proxy-98jls" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:03.597674  361350 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:58.862201  366710 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:05:58.867008  366710 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1205 07:05:58.867995  366710 api_server.go:141] control plane version: v1.35.0-beta.0
	I1205 07:05:58.868017  366710 api_server.go:131] duration metric: took 1.006376467s to wait for apiserver health ...
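The health wait above is a plain HTTPS GET against the endpoint shown in the log; reproduced by hand from the host it would look roughly like this (a sketch; -k only because the cluster CA is not in the host trust store):

    curl -k https://192.168.85.2:8443/healthz

A healthy apiserver answers 200 with the body "ok", which matches the response captured above.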
	I1205 07:05:58.868026  366710 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:05:58.871519  366710 system_pods.go:59] 8 kube-system pods found
	I1205 07:05:58.871555  366710 system_pods.go:61] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:58.871566  366710 system_pods.go:61] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:05:58.871579  366710 system_pods.go:61] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:05:58.871585  366710 system_pods.go:61] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:05:58.871593  366710 system_pods.go:61] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:05:58.871598  366710 system_pods.go:61] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:05:58.871609  366710 system_pods.go:61] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:05:58.871616  366710 system_pods.go:61] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:58.871628  366710 system_pods.go:74] duration metric: took 3.595932ms to wait for pod list to return data ...
	I1205 07:05:58.871641  366710 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:05:58.873971  366710 default_sa.go:45] found service account: "default"
	I1205 07:05:58.873989  366710 default_sa.go:55] duration metric: took 2.342026ms for default service account to be created ...
	I1205 07:05:58.873999  366710 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:05:58.876526  366710 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:58.876552  366710 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:58.876564  366710 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:05:58.876572  366710 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:05:58.876578  366710 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:05:58.876584  366710 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:05:58.876592  366710 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:05:58.876597  366710 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:05:58.876605  366710 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:58.876611  366710 system_pods.go:126] duration metric: took 2.607202ms to wait for k8s-apps to be running ...
	I1205 07:05:58.876620  366710 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:05:58.876654  366710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:05:58.889311  366710 system_svc.go:56] duration metric: took 12.685986ms WaitForService to wait for kubelet
	I1205 07:05:58.889358  366710 kubeadm.go:587] duration metric: took 3.2316491s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:05:58.889379  366710 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:05:58.891693  366710 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:05:58.891712  366710 node_conditions.go:123] node cpu capacity is 8
	I1205 07:05:58.891725  366710 node_conditions.go:105] duration metric: took 2.341752ms to run NodePressure ...
	I1205 07:05:58.891735  366710 start.go:242] waiting for startup goroutines ...
	I1205 07:05:58.891745  366710 start.go:247] waiting for cluster config update ...
	I1205 07:05:58.891760  366710 start.go:256] writing updated cluster config ...
	I1205 07:05:58.891980  366710 ssh_runner.go:195] Run: rm -f paused
	I1205 07:05:58.895376  366710 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:05:58.898174  366710 pod_ready.go:83] waiting for pod "coredns-7d764666f9-bvbhf" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:06:00.903613  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:03.403874  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:03.996446  361350 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-874709" is "Ready"
	I1205 07:06:03.996477  361350 pod_ready.go:86] duration metric: took 398.777833ms for pod "kube-scheduler-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:03.996491  361350 pod_ready.go:40] duration metric: took 34.907225297s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:04.054517  361350 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1205 07:06:04.057064  361350 out.go:203] 
	W1205 07:06:04.058523  361350 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1205 07:06:04.059711  361350 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1205 07:06:04.060978  361350 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-874709" cluster and "default" namespace by default
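The skew warning above only concerns the host kubectl (1.34.2) being newer than the 1.28.0 cluster; the hint it prints can be followed per profile, roughly like this (sketch, profile name taken from this log):

    out/minikube-linux-amd64 -p old-k8s-version-874709 kubectl -- get pods -A

This uses a kubectl matching the cluster's Kubernetes version instead of /usr/local/bin/kubectl.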
	I1205 07:06:01.370314  369138 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-172186" ...
	I1205 07:06:01.370393  369138 cli_runner.go:164] Run: docker start default-k8s-diff-port-172186
	I1205 07:06:01.617870  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:01.636485  369138 kic.go:430] container "default-k8s-diff-port-172186" state is running.
	I1205 07:06:01.636802  369138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172186
	I1205 07:06:01.654671  369138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/config.json ...
	I1205 07:06:01.654872  369138 machine.go:94] provisionDockerMachine start ...
	I1205 07:06:01.654941  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:01.673701  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:01.673924  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:01.673936  369138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:06:01.674676  369138 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46964->127.0.0.1:33123: read: connection reset by peer
	I1205 07:06:04.821968  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-172186
	
	I1205 07:06:04.821994  369138 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-172186"
	I1205 07:06:04.822076  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:04.844977  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:04.845221  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:04.845236  369138 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-172186 && echo "default-k8s-diff-port-172186" | sudo tee /etc/hostname
	I1205 07:06:05.021790  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-172186
	
	I1205 07:06:05.021876  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:05.048047  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:05.048394  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:05.048426  369138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-172186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-172186/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-172186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:06:05.207090  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
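The SSH snippet above only patches the 127.0.1.1 line inside the node container. A minimal check that both the hostname and the hosts entry ended up as expected (sketch, profile name assumed from this log):

    out/minikube-linux-amd64 -p default-k8s-diff-port-172186 ssh -- "hostname; grep 127.0.1.1 /etc/hosts"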
	I1205 07:06:05.207125  369138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:06:05.207167  369138 ubuntu.go:190] setting up certificates
	I1205 07:06:05.207177  369138 provision.go:84] configureAuth start
	I1205 07:06:05.207255  369138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172186
	I1205 07:06:05.232395  369138 provision.go:143] copyHostCerts
	I1205 07:06:05.232460  369138 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:06:05.232471  369138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:06:05.232555  369138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:06:05.232703  369138 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:06:05.232719  369138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:06:05.232765  369138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:06:05.232861  369138 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:06:05.232872  369138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:06:05.232911  369138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:06:05.232988  369138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-172186 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-172186 localhost minikube]
	I1205 07:06:05.364735  369138 provision.go:177] copyRemoteCerts
	I1205 07:06:05.364786  369138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:06:05.364817  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:05.388117  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:05.499381  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:06:05.522631  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 07:06:05.545521  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:06:05.568070  369138 provision.go:87] duration metric: took 360.875348ms to configureAuth
	I1205 07:06:05.568099  369138 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:06:05.568372  369138 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:05.568548  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:05.590384  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:05.590652  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:05.590675  369138 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:06:06.903874  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:06:06.903896  369138 machine.go:97] duration metric: took 5.249008974s to provisionDockerMachine
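provisionDockerMachine finishes by writing the insecure-registry flag into /etc/sysconfig/crio.minikube and restarting CRI-O; confirming the file landed is a one-liner (sketch, same assumed profile):

    out/minikube-linux-amd64 -p default-k8s-diff-port-172186 ssh -- cat /etc/sysconfig/crio.minikube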
	I1205 07:06:06.903916  369138 start.go:293] postStartSetup for "default-k8s-diff-port-172186" (driver="docker")
	I1205 07:06:06.903928  369138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:06:06.903987  369138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:06:06.904029  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:06.925627  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:07.029099  369138 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:06:07.032724  369138 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:06:07.032746  369138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:06:07.032759  369138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:06:07.032815  369138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:06:07.032888  369138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:06:07.033013  369138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:06:07.041901  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:07.061013  369138 start.go:296] duration metric: took 157.082278ms for postStartSetup
	I1205 07:06:07.061092  369138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:06:07.061159  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:07.082205  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:07.182483  369138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:06:07.187449  369138 fix.go:56] duration metric: took 5.834857369s for fixHost
	I1205 07:06:07.187479  369138 start.go:83] releasing machines lock for "default-k8s-diff-port-172186", held for 5.834903523s
	I1205 07:06:07.187536  369138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172186
	I1205 07:06:07.207183  369138 ssh_runner.go:195] Run: cat /version.json
	I1205 07:06:07.207261  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:07.207265  369138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:06:07.207364  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:07.229035  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:07.229296  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:07.385648  369138 ssh_runner.go:195] Run: systemctl --version
	I1205 07:06:07.392589  369138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:06:07.430856  369138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:06:07.436189  369138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:06:07.436253  369138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:06:07.444842  369138 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:06:07.444862  369138 start.go:496] detecting cgroup driver to use...
	I1205 07:06:07.444893  369138 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:06:07.444951  369138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:06:07.460241  369138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:06:07.473695  369138 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:06:07.473762  369138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:06:07.489755  369138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:06:07.502411  369138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:06:07.588055  369138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:06:07.675270  369138 docker.go:234] disabling docker service ...
	I1205 07:06:07.675365  369138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:06:07.690468  369138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:06:07.703523  369138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:06:07.804032  369138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:06:07.886506  369138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:06:07.899154  369138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:06:07.913624  369138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:06:07.913693  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.922196  369138 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:06:07.922247  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.930564  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.938677  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.947127  369138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:06:07.954727  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.963475  369138 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.971688  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.982358  369138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:06:07.991662  369138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:06:07.999059  369138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:08.095980  369138 ssh_runner.go:195] Run: sudo systemctl restart crio
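Taken together, the sed edits above amount to the following overrides in /etc/crio/crio.conf.d/02-crio.conf (reconstructed from the commands in this log rather than captured from the node, so treat it as an approximation):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

After the restart, CRI-O comes back with the systemd cgroup driver and unprivileged low ports allowed, which is what the crictl/crio version probes below run against.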
	I1205 07:06:08.420298  369138 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:06:08.420383  369138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:06:08.424303  369138 start.go:564] Will wait 60s for crictl version
	I1205 07:06:08.424382  369138 ssh_runner.go:195] Run: which crictl
	I1205 07:06:08.428123  369138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:06:08.452789  369138 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:06:08.452861  369138 ssh_runner.go:195] Run: crio --version
	I1205 07:06:08.492736  369138 ssh_runner.go:195] Run: crio --version
	I1205 07:06:08.525519  369138 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	W1205 07:06:05.904238  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:08.403448  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:08.530209  369138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-172186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:08.549687  369138 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1205 07:06:08.553769  369138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:08.563884  369138 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:06:08.564005  369138 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:08.564046  369138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:08.595573  369138 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:06:08.595590  369138 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:06:08.595628  369138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:08.619710  369138 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:06:08.619728  369138 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:06:08.619735  369138 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.2 crio true true} ...
	I1205 07:06:08.619861  369138 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-172186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:06:08.619919  369138 ssh_runner.go:195] Run: crio config
	I1205 07:06:08.663749  369138 cni.go:84] Creating CNI manager for ""
	I1205 07:06:08.663775  369138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:08.663795  369138 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:06:08.663827  369138 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-172186 NodeName:default-k8s-diff-port-172186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:06:08.663978  369138 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-172186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:06:08.664049  369138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:06:08.671837  369138 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:06:08.671891  369138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:06:08.679356  369138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1205 07:06:08.691563  369138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:06:08.703421  369138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
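The 2224-byte payload copied here is the kubeadm/kubelet/kube-proxy configuration printed above. If a restart like this one needs debugging, the rendered file can be pulled back off the node (sketch, profile name assumed from this log):

    out/minikube-linux-amd64 -p default-k8s-diff-port-172186 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new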
	I1205 07:06:08.715827  369138 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:06:08.719126  369138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:08.728395  369138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:08.813134  369138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:08.837383  369138 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186 for IP: 192.168.94.2
	I1205 07:06:08.837410  369138 certs.go:195] generating shared ca certs ...
	I1205 07:06:08.837426  369138 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:08.837599  369138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:06:08.837654  369138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:06:08.837673  369138 certs.go:257] generating profile certs ...
	I1205 07:06:08.837785  369138 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/client.key
	I1205 07:06:08.837854  369138 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/apiserver.key.83c70576
	I1205 07:06:08.837905  369138 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/proxy-client.key
	I1205 07:06:08.838051  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:06:08.838093  369138 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:06:08.838103  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:06:08.838137  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:06:08.838174  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:06:08.838208  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:06:08.838263  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:08.838899  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:06:08.856272  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:06:08.874469  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:06:08.893284  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:06:08.915960  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 07:06:08.934214  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 07:06:08.950394  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:06:08.966781  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:06:08.983164  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:06:08.999520  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:06:09.015937  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:06:09.033559  369138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:06:09.045273  369138 ssh_runner.go:195] Run: openssl version
	I1205 07:06:09.051115  369138 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:06:09.058003  369138 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:06:09.064725  369138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:06:09.068128  369138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:06:09.068173  369138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:06:09.106428  369138 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:06:09.113687  369138 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:09.121104  369138 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:06:09.128303  369138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:09.131941  369138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:09.131987  369138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:09.165708  369138 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:06:09.172574  369138 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:06:09.179353  369138 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:06:09.186638  369138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:06:09.190195  369138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:06:09.190251  369138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:06:09.224040  369138 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:06:09.230828  369138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:06:09.234193  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:06:09.268487  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:06:09.301515  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:06:09.334177  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:06:09.379697  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:06:09.427803  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
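The `openssl x509 -checkend 86400` calls above exit 0 only if the certificate is still valid 86400 seconds (24 hours) from now. A minimal sketch of the same check run by hand, using one of the cert paths from this log (the echo strings are illustrative only):

    # Exit status 0: the certificate will still be valid 24 hours from now
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for >=24h" || echo "expires within 24h (or already expired)"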
	I1205 07:06:09.485297  369138 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:09.485420  369138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:06:09.485525  369138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:06:09.520393  369138 cri.go:89] found id: "ed8de5e69d48178f99d8fc4509335772d9301f83872fdafa6ee82b6e6883c141"
	I1205 07:06:09.520417  369138 cri.go:89] found id: "b8424f777108894c3d90c6444a4cb21c9dab385dcfca8b378b0637e27eb4bd6f"
	I1205 07:06:09.520423  369138 cri.go:89] found id: "b75fc581167e9dc3ab0503563eaf8c4d2824d2a1cb80aeb0d90ec0ccbe49c84e"
	I1205 07:06:09.520428  369138 cri.go:89] found id: "d42f7b44a3dec7cdfb77e71f8c1b0ea379df337d93c48967c985cfb5efc79957"
	I1205 07:06:09.520432  369138 cri.go:89] found id: ""
	I1205 07:06:09.520479  369138 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:06:09.534965  369138 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:09Z" level=error msg="open /run/runc: no such file or directory"
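The warning above is non-fatal: the paused-container check falls back and the restart continues on the next line. A hedged sketch of checking the runtime state directory by hand before listing (directory names vary by runtime; /run/runc and /run/crun are common defaults and are assumptions here):

    # See which OCI runtime state directory actually exists on this node
    ls -d /run/runc /run/crun 2>/dev/null
    # Same listing minikube attempts; an empty or missing state dir means nothing is paused
    sudo runc list -f json 2>/dev/null || echo "no runc state dir; treating as no paused containers"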
	I1205 07:06:09.535034  369138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:06:09.545001  369138 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:06:09.545020  369138 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:06:09.545062  369138 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:06:09.553591  369138 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:06:09.554621  369138 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-172186" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:09.555353  369138 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-172186" cluster setting kubeconfig missing "default-k8s-diff-port-172186" context setting]
	I1205 07:06:09.556832  369138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:09.559016  369138 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:06:09.568009  369138 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1205 07:06:09.568049  369138 kubeadm.go:602] duration metric: took 23.022815ms to restartPrimaryControlPlane
	I1205 07:06:09.568059  369138 kubeadm.go:403] duration metric: took 82.77342ms to StartCluster
	I1205 07:06:09.568080  369138 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:09.568158  369138 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:09.570193  369138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:09.570467  369138 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:06:09.570663  369138 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:09.570629  369138 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:06:09.570743  369138 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-172186"
	I1205 07:06:09.570764  369138 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-172186"
	W1205 07:06:09.570772  369138 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:06:09.570790  369138 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-172186"
	I1205 07:06:09.570800  369138 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-172186"
	I1205 07:06:09.570807  369138 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:06:09.570819  369138 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-172186"
	I1205 07:06:09.570823  369138 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-172186"
	W1205 07:06:09.570829  369138 addons.go:248] addon dashboard should already be in state true
	I1205 07:06:09.570869  369138 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:06:09.571118  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:09.571276  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:09.571525  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:09.572720  369138 out.go:179] * Verifying Kubernetes components...
	I1205 07:06:09.574174  369138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:09.601249  369138 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:06:09.601301  369138 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:09.602496  369138 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:09.602524  369138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:06:09.602614  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:09.603532  369138 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 07:06:09.604392  369138 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-172186"
	I1205 07:06:09.604408  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	W1205 07:06:09.604414  369138 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:06:09.604419  369138 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:06:09.604440  369138 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:06:09.604484  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:09.605017  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:09.641337  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:09.643133  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:09.643965  369138 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:09.643985  369138 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:06:09.644041  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:09.668555  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:09.737475  369138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:09.750711  369138 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-172186" to be "Ready" ...
	I1205 07:06:09.766664  369138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:09.767545  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:06:09.767572  369138 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:06:09.785136  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:06:09.785154  369138 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:06:09.798169  369138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:09.805464  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:06:09.805487  369138 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:06:09.824092  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:06:09.824153  369138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:06:09.843896  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:06:09.843934  369138 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:06:09.861616  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:06:09.861637  369138 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:06:09.876693  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:06:09.876712  369138 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:06:09.890832  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:06:09.890848  369138 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:06:09.906231  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:06:09.906258  369138 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:06:09.920399  369138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
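A hedged follow-up to the single `kubectl apply` above, assuming kubectl is pointed at this cluster; the kubernetes-dashboard namespace is the one the addon manifests create (visible later in this report):

    # Watch the dashboard components created by the apply above come up
    kubectl -n kubernetes-dashboard get deployments,pods,services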
	I1205 07:06:11.522589  369138 node_ready.go:49] node "default-k8s-diff-port-172186" is "Ready"
	I1205 07:06:11.522618  369138 node_ready.go:38] duration metric: took 1.771873848s for node "default-k8s-diff-port-172186" to be "Ready" ...
	I1205 07:06:11.522633  369138 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:06:11.522681  369138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:06:12.014838  369138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.248140228s)
	I1205 07:06:12.014932  369138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.216729098s)
	I1205 07:06:12.015042  369138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.09461333s)
	I1205 07:06:12.015096  369138 api_server.go:72] duration metric: took 2.444598602s to wait for apiserver process to appear ...
	I1205 07:06:12.015116  369138 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:06:12.015187  369138 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1205 07:06:12.016535  369138 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-172186 addons enable metrics-server
	
	I1205 07:06:12.019788  369138 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:06:12.019807  369138 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:06:12.023173  369138 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1205 07:06:10.404234  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:12.902940  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:12.024135  369138 addons.go:530] duration metric: took 2.453513644s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1205 07:06:12.515923  369138 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1205 07:06:12.520861  369138 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:06:12.520889  369138 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:06:13.015284  369138 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1205 07:06:13.019975  369138 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1205 07:06:13.020990  369138 api_server.go:141] control plane version: v1.34.2
	I1205 07:06:13.021016  369138 api_server.go:131] duration metric: took 1.005842634s to wait for apiserver health ...
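The earlier 500 responses were transient: the two failing post-start hooks (rbac/bootstrap-roles and the priority-class bootstrap) clear as startup finishes, and roughly a second later the endpoint returns 200. A minimal sketch of probing it by hand, using the IP, port and CA path from this run:

    # Verbose mode prints the same per-check [+]/[-] breakdown seen above
    curl -sk "https://192.168.94.2:8444/healthz?verbose"
    # Or verify the server certificate against the minikube CA instead of -k
    curl -s --cacert /var/lib/minikube/certs/ca.crt "https://192.168.94.2:8444/healthz?verbose"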
	I1205 07:06:13.021026  369138 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:06:13.023666  369138 system_pods.go:59] 8 kube-system pods found
	I1205 07:06:13.023702  369138 system_pods.go:61] "coredns-66bc5c9577-lzlm8" [ee60b2ad-840a-442d-9475-85e27048c452] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:06:13.023712  369138 system_pods.go:61] "etcd-default-k8s-diff-port-172186" [f165837d-edeb-4226-920b-b23d2ca9bf68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:06:13.023721  369138 system_pods.go:61] "kindnet-w2mzg" [3de2accc-6a87-4b4c-920d-74d5b5058c8e] Running
	I1205 07:06:13.023728  369138 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-172186" [f0c01c8a-a8dd-4883-9b95-1c85dddc33d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:13.023738  369138 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-172186" [74cc489e-2a21-4ab1-b8a3-b2bfca1c58ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:13.023742  369138 system_pods.go:61] "kube-proxy-fpss6" [9c1a939e-c7e6-4202-bffa-374ace420fd7] Running
	I1205 07:06:13.023747  369138 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-172186" [e0764d08-18fe-47c0-b6b1-648c2c6fb1db] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:13.023754  369138 system_pods.go:61] "storage-provisioner" [cf31286d-bf29-4883-828c-4e9aee83201f] Running
	I1205 07:06:13.023760  369138 system_pods.go:74] duration metric: took 2.728175ms to wait for pod list to return data ...
	I1205 07:06:13.023770  369138 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:06:13.025735  369138 default_sa.go:45] found service account: "default"
	I1205 07:06:13.025754  369138 default_sa.go:55] duration metric: took 1.97857ms for default service account to be created ...
	I1205 07:06:13.025764  369138 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:06:13.028200  369138 system_pods.go:86] 8 kube-system pods found
	I1205 07:06:13.028223  369138 system_pods.go:89] "coredns-66bc5c9577-lzlm8" [ee60b2ad-840a-442d-9475-85e27048c452] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:06:13.028231  369138 system_pods.go:89] "etcd-default-k8s-diff-port-172186" [f165837d-edeb-4226-920b-b23d2ca9bf68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:06:13.028236  369138 system_pods.go:89] "kindnet-w2mzg" [3de2accc-6a87-4b4c-920d-74d5b5058c8e] Running
	I1205 07:06:13.028242  369138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-172186" [f0c01c8a-a8dd-4883-9b95-1c85dddc33d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:13.028248  369138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-172186" [74cc489e-2a21-4ab1-b8a3-b2bfca1c58ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:13.028253  369138 system_pods.go:89] "kube-proxy-fpss6" [9c1a939e-c7e6-4202-bffa-374ace420fd7] Running
	I1205 07:06:13.028258  369138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-172186" [e0764d08-18fe-47c0-b6b1-648c2c6fb1db] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:13.028262  369138 system_pods.go:89] "storage-provisioner" [cf31286d-bf29-4883-828c-4e9aee83201f] Running
	I1205 07:06:13.028268  369138 system_pods.go:126] duration metric: took 2.498302ms to wait for k8s-apps to be running ...
	I1205 07:06:13.028277  369138 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:06:13.028333  369138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:13.040713  369138 system_svc.go:56] duration metric: took 12.430515ms WaitForService to wait for kubelet
	I1205 07:06:13.040732  369138 kubeadm.go:587] duration metric: took 3.470237015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:13.040746  369138 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:06:13.042771  369138 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:06:13.042790  369138 node_conditions.go:123] node cpu capacity is 8
	I1205 07:06:13.042814  369138 node_conditions.go:105] duration metric: took 2.063513ms to run NodePressure ...
	I1205 07:06:13.042823  369138 start.go:242] waiting for startup goroutines ...
	I1205 07:06:13.042830  369138 start.go:247] waiting for cluster config update ...
	I1205 07:06:13.042839  369138 start.go:256] writing updated cluster config ...
	I1205 07:06:13.043057  369138 ssh_runner.go:195] Run: rm -f paused
	I1205 07:06:13.046776  369138 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:13.050088  369138 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lzlm8" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:06:15.054020  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:14.903791  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:16.904837  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 05 07:05:47 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:47.106967285Z" level=info msg="Started container" PID=1763 containerID=ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper id=236a6e31-acd9-407c-9cdf-32e4b5e2a153 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b62159d1713eef2a7aba0953fad4ebf207b5b35255c8c0a6cf684c79cf4e2c4b
	Dec 05 07:05:48 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:48.065734661Z" level=info msg="Removing container: 40c519087f6367b7281c0cf35cac3fd8621ea8c5e77dcb91ef9fecf71d44e4ba" id=6a06d94a-ad7e-4fb5-b0b2-82134effa813 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:05:48 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:48.076685723Z" level=info msg="Removed container 40c519087f6367b7281c0cf35cac3fd8621ea8c5e77dcb91ef9fecf71d44e4ba: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper" id=6a06d94a-ad7e-4fb5-b0b2-82134effa813 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.090731888Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=83f71281-80b3-4a1c-81df-dce5bad9bb44 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.091584215Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d6b0f75f-15b8-4038-b523-ccc3d08271aa name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.092476985Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9a06ffbb-0123-42e0-ab67-36cdd6a1be46 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.092696741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.097505057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.097691162Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ff82fa1671c22fd2d93d67169a16a05c1caca7b029a11886d3fb53bdd0356d14/merged/etc/passwd: no such file or directory"
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.097716015Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ff82fa1671c22fd2d93d67169a16a05c1caca7b029a11886d3fb53bdd0356d14/merged/etc/group: no such file or directory"
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.098004501Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.127585174Z" level=info msg="Created container 9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244: kube-system/storage-provisioner/storage-provisioner" id=9a06ffbb-0123-42e0-ab67-36cdd6a1be46 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.128030394Z" level=info msg="Starting container: 9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244" id=b1467f2d-edec-4395-8fc2-0f84696f03c2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.129638465Z" level=info msg="Started container" PID=1778 containerID=9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244 description=kube-system/storage-provisioner/storage-provisioner id=b1467f2d-edec-4395-8fc2-0f84696f03c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9403b42cf53968038c0583742a6622d795b5df21d9a239162cc9ab200b3e8e9
	Dec 05 07:06:01 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:01.994356333Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9e350688-2cba-440b-8178-fb50f7af443d name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:01 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:01.995317645Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=61bf287e-1350-4849-b5d9-35f31e9f2812 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:01 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:01.99638489Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper" id=62baa06c-a8a6-444d-be68-7afdf0164744 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:01 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:01.996521771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.001849831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.002309849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.028105896Z" level=info msg="Created container f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper" id=62baa06c-a8a6-444d-be68-7afdf0164744 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.028608919Z" level=info msg="Starting container: f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee" id=7449f139-779d-4a99-9694-17f595afe7e3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.030105257Z" level=info msg="Started container" PID=1794 containerID=f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper id=7449f139-779d-4a99-9694-17f595afe7e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b62159d1713eef2a7aba0953fad4ebf207b5b35255c8c0a6cf684c79cf4e2c4b
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.101501848Z" level=info msg="Removing container: ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b" id=a7914892-5ca7-4575-ad13-ee4b23056cc5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.113214074Z" level=info msg="Removed container ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper" id=a7914892-5ca7-4575-ad13-ee4b23056cc5 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f2f2a155f4693       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   b62159d1713ee       dashboard-metrics-scraper-5f989dc9cf-vhgmg       kubernetes-dashboard
	9ba86f612b662       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   b9403b42cf539       storage-provisioner                              kube-system
	64c85e718ac4a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   74c7ae571a78b       kubernetes-dashboard-8694d4445c-xn6nb            kubernetes-dashboard
	9fbd3a07129cf       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   1a60cf99343d8       busybox                                          default
	52173fac10a5e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   d729cd909fac8       coredns-5dd5756b68-srvvk                         kube-system
	87a1771d8b8eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   b9403b42cf539       storage-provisioner                              kube-system
	189089a1551ba       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   55708cdd43b71       kindnet-f9lmb                                    kube-system
	d6ff518de54f6       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   bf876763196e2       kube-proxy-98jls                                 kube-system
	a5a9622dfd7dc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   3a2ef019cb23c       kube-apiserver-old-k8s-version-874709            kube-system
	6be13235867d4       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   1adbd89eeb21e       kube-scheduler-old-k8s-version-874709            kube-system
	7c7e915cc7bec       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   41efa262abe73       etcd-old-k8s-version-874709                      kube-system
	ffe21b4df5d3a       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   341a1abe92e7d       kube-controller-manager-old-k8s-version-874709   kube-system
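The table above is CRI-O's container listing for the old-k8s-version node; a hedged way to regenerate it (profile name taken from the surrounding log):

    # -a includes exited containers such as the failed dashboard-metrics-scraper attempts
    minikube -p old-k8s-version-874709 ssh -- sudo crictl ps -a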
	
	
	==> coredns [52173fac10a5e3ea6e7f6a16a2d0beb412c01dcc4c73551b2d1d4d3d9a969797] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49596 - 8409 "HINFO IN 2101740535183586278.54168126310120430. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.099426689s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
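The final warning means CoreDNS could not reach the in-cluster apiserver Service (10.96.0.1:443) at startup. Two hedged checks, assuming kubectl targets this cluster; the container ID comes from the status table below:

    # Confirm the kubernetes Service still has an apiserver endpoint behind 10.96.0.1
    kubectl get endpoints kubernetes -o wide
    # Re-read the CoreDNS container logs captured above
    minikube -p old-k8s-version-874709 ssh -- sudo crictl logs 52173fac10a5e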
	
	
	==> describe nodes <==
	Name:               old-k8s-version-874709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-874709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=old-k8s-version-874709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_04_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:04:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-874709
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:06:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:05:58 +0000   Fri, 05 Dec 2025 07:04:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:05:58 +0000   Fri, 05 Dec 2025 07:04:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:05:58 +0000   Fri, 05 Dec 2025 07:04:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:05:58 +0000   Fri, 05 Dec 2025 07:04:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-874709
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                5af588f9-e276-46d0-bc7e-d873d5f0f898
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-srvvk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-old-k8s-version-874709                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-f9lmb                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-874709             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-874709    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-98jls                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-874709             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vhgmg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-xn6nb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-874709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-874709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node old-k8s-version-874709 event: Registered Node old-k8s-version-874709 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-874709 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)    kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)    kubelet          Node old-k8s-version-874709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)    kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-874709 event: Registered Node old-k8s-version-874709 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [7c7e915cc7becaf51abc1256271d87f755bc16e224a0daf6a90d291932385f08] <==
	{"level":"info","ts":"2025-12-05T07:05:25.554788Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-05T07:05:25.554798Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-05T07:05:25.554957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-05T07:05:25.555124Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-05T07:05:25.555291Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-05T07:05:25.555346Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-05T07:05:25.557077Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-05T07:05:25.557842Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-05T07:05:25.558617Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-05T07:05:25.558151Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-05T07:05:25.558488Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-05T07:05:26.646404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-05T07:05:26.646442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-05T07:05:26.646474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-05T07:05:26.646486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-05T07:05:26.646507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-05T07:05:26.646514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-05T07:05:26.646526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-05T07:05:26.647987Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-874709 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-05T07:05:26.647993Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-05T07:05:26.64808Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-05T07:05:26.648188Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-05T07:05:26.648213Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-05T07:05:26.649557Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-05T07:05:26.649662Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 07:06:19 up  1:48,  0 user,  load average: 3.79, 3.33, 2.23
	Linux old-k8s-version-874709 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [189089a1551ba3627eb3128161e1bb599ef06f715efd379e386fde9d94c02bf3] <==
	I1205 07:05:28.603267       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:05:28.603579       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1205 07:05:28.603777       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:05:28.603801       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:05:28.603828       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:05:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:05:28.805103       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:05:28.805259       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:05:28.805277       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:05:28.805449       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:05:29.105734       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:05:29.105765       1 metrics.go:72] Registering metrics
	I1205 07:05:29.105849       1 controller.go:711] "Syncing nftables rules"
	I1205 07:05:38.805530       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:05:38.805571       1 main.go:301] handling current node
	I1205 07:05:48.805543       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:05:48.805591       1 main.go:301] handling current node
	I1205 07:05:58.805282       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:05:58.805311       1 main.go:301] handling current node
	I1205 07:06:08.805247       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:06:08.805286       1 main.go:301] handling current node
	I1205 07:06:18.811459       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:06:18.811501       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5a9622dfd7dc6fdcabf3ea8aec3eaeabfdda77bc311ed906f332cc7d039353d] <==
	I1205 07:05:27.569430       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1205 07:05:27.616870       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:05:27.652362       1 shared_informer.go:318] Caches are synced for configmaps
	I1205 07:05:27.652402       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1205 07:05:27.652406       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1205 07:05:27.652498       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 07:05:27.652369       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1205 07:05:27.652882       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1205 07:05:27.652377       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 07:05:27.669951       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1205 07:05:27.669988       1 aggregator.go:166] initial CRD sync complete...
	I1205 07:05:27.669994       1 autoregister_controller.go:141] Starting autoregister controller
	I1205 07:05:27.669999       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:05:27.670005       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:05:28.445173       1 controller.go:624] quota admission added evaluator for: namespaces
	I1205 07:05:28.474134       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1205 07:05:28.489417       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:05:28.497814       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:05:28.506206       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1205 07:05:28.537958       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.150.131"}
	I1205 07:05:28.548917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:05:28.551054       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.11.26"}
	I1205 07:05:40.265508       1 controller.go:624] quota admission added evaluator for: endpoints
	I1205 07:05:40.315515       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:05:40.416213       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ffe21b4df5d3a969685218725304cbe5f9fc2b6432a5f7451e96a4edabf288fc] <==
	I1205 07:05:40.370379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="131.181µs"
	I1205 07:05:40.418402       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1205 07:05:40.419719       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1205 07:05:40.426812       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-xn6nb"
	I1205 07:05:40.426931       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-vhgmg"
	I1205 07:05:40.431287       1 shared_informer.go:318] Caches are synced for garbage collector
	I1205 07:05:40.431409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.27221ms"
	I1205 07:05:40.433151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.870596ms"
	I1205 07:05:40.443033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.540799ms"
	I1205 07:05:40.443101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.943µs"
	I1205 07:05:40.443116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.808304ms"
	I1205 07:05:40.443150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.957µs"
	I1205 07:05:40.448622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.155µs"
	I1205 07:05:40.456258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="37.328µs"
	I1205 07:05:40.481672       1 shared_informer.go:318] Caches are synced for garbage collector
	I1205 07:05:40.481696       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1205 07:05:45.072781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.238781ms"
	I1205 07:05:45.073041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.815µs"
	I1205 07:05:47.071484       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.123µs"
	I1205 07:05:48.077002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.273µs"
	I1205 07:05:49.081724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="121.623µs"
	I1205 07:06:02.110934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="118.876µs"
	I1205 07:06:02.356973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.109265ms"
	I1205 07:06:02.357308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.863µs"
	I1205 07:06:10.747162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.887µs"
	
	
	==> kube-proxy [d6ff518de54f6fad8b6cef69f6ec5441de106d8cf80d95cb9fd83fa183cec7a0] <==
	I1205 07:05:28.410103       1 server_others.go:69] "Using iptables proxy"
	I1205 07:05:28.419095       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1205 07:05:28.437091       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:05:28.439375       1 server_others.go:152] "Using iptables Proxier"
	I1205 07:05:28.439404       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1205 07:05:28.439414       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1205 07:05:28.439447       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 07:05:28.439666       1 server.go:846] "Version info" version="v1.28.0"
	I1205 07:05:28.439683       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:28.441083       1 config.go:188] "Starting service config controller"
	I1205 07:05:28.441120       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 07:05:28.441155       1 config.go:97] "Starting endpoint slice config controller"
	I1205 07:05:28.441162       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 07:05:28.441388       1 config.go:315] "Starting node config controller"
	I1205 07:05:28.441441       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 07:05:28.541641       1 shared_informer.go:318] Caches are synced for node config
	I1205 07:05:28.541675       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 07:05:28.541692       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6be13235867d468a9e246f51290d3c4f7ea7f6f8510393f2a1b3dab9fbb99a9b] <==
	I1205 07:05:26.056981       1 serving.go:348] Generated self-signed cert in-memory
	W1205 07:05:27.603881       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 07:05:27.603921       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 07:05:27.603936       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 07:05:27.603946       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 07:05:27.624256       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1205 07:05:27.624291       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:27.626030       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:05:27.626063       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 07:05:27.627221       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1205 07:05:27.627298       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1205 07:05:27.727198       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 07:05:40 old-k8s-version-874709 kubelet[733]: I1205 07:05:40.474469     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmxpp\" (UniqueName: \"kubernetes.io/projected/fd771a55-07e2-4e40-8419-550f7c0bfe62-kube-api-access-xmxpp\") pod \"kubernetes-dashboard-8694d4445c-xn6nb\" (UID: \"fd771a55-07e2-4e40-8419-550f7c0bfe62\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xn6nb"
	Dec 05 07:05:40 old-k8s-version-874709 kubelet[733]: I1205 07:05:40.474517     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fd771a55-07e2-4e40-8419-550f7c0bfe62-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-xn6nb\" (UID: \"fd771a55-07e2-4e40-8419-550f7c0bfe62\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xn6nb"
	Dec 05 07:05:40 old-k8s-version-874709 kubelet[733]: I1205 07:05:40.474546     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a4105d97-2e10-47da-ad49-7e8b8c808636-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-vhgmg\" (UID: \"a4105d97-2e10-47da-ad49-7e8b8c808636\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg"
	Dec 05 07:05:40 old-k8s-version-874709 kubelet[733]: I1205 07:05:40.474572     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txpl\" (UniqueName: \"kubernetes.io/projected/a4105d97-2e10-47da-ad49-7e8b8c808636-kube-api-access-5txpl\") pod \"dashboard-metrics-scraper-5f989dc9cf-vhgmg\" (UID: \"a4105d97-2e10-47da-ad49-7e8b8c808636\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg"
	Dec 05 07:05:45 old-k8s-version-874709 kubelet[733]: I1205 07:05:45.066641     733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xn6nb" podStartSLOduration=1.782376309 podCreationTimestamp="2025-12-05 07:05:40 +0000 UTC" firstStartedPulling="2025-12-05 07:05:40.756568773 +0000 UTC m=+15.848699287" lastFinishedPulling="2025-12-05 07:05:44.040773913 +0000 UTC m=+19.132904433" observedRunningTime="2025-12-05 07:05:45.066316261 +0000 UTC m=+20.158446797" watchObservedRunningTime="2025-12-05 07:05:45.066581455 +0000 UTC m=+20.158711974"
	Dec 05 07:05:47 old-k8s-version-874709 kubelet[733]: I1205 07:05:47.060422     733 scope.go:117] "RemoveContainer" containerID="40c519087f6367b7281c0cf35cac3fd8621ea8c5e77dcb91ef9fecf71d44e4ba"
	Dec 05 07:05:48 old-k8s-version-874709 kubelet[733]: I1205 07:05:48.064286     733 scope.go:117] "RemoveContainer" containerID="40c519087f6367b7281c0cf35cac3fd8621ea8c5e77dcb91ef9fecf71d44e4ba"
	Dec 05 07:05:48 old-k8s-version-874709 kubelet[733]: I1205 07:05:48.064497     733 scope.go:117] "RemoveContainer" containerID="ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b"
	Dec 05 07:05:48 old-k8s-version-874709 kubelet[733]: E1205 07:05:48.064884     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vhgmg_kubernetes-dashboard(a4105d97-2e10-47da-ad49-7e8b8c808636)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg" podUID="a4105d97-2e10-47da-ad49-7e8b8c808636"
	Dec 05 07:05:49 old-k8s-version-874709 kubelet[733]: I1205 07:05:49.068562     733 scope.go:117] "RemoveContainer" containerID="ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b"
	Dec 05 07:05:49 old-k8s-version-874709 kubelet[733]: E1205 07:05:49.068936     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vhgmg_kubernetes-dashboard(a4105d97-2e10-47da-ad49-7e8b8c808636)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg" podUID="a4105d97-2e10-47da-ad49-7e8b8c808636"
	Dec 05 07:05:50 old-k8s-version-874709 kubelet[733]: I1205 07:05:50.735651     733 scope.go:117] "RemoveContainer" containerID="ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b"
	Dec 05 07:05:50 old-k8s-version-874709 kubelet[733]: E1205 07:05:50.735911     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vhgmg_kubernetes-dashboard(a4105d97-2e10-47da-ad49-7e8b8c808636)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg" podUID="a4105d97-2e10-47da-ad49-7e8b8c808636"
	Dec 05 07:05:59 old-k8s-version-874709 kubelet[733]: I1205 07:05:59.090315     733 scope.go:117] "RemoveContainer" containerID="87a1771d8b8eb6617f0f7a7a79ed8a6ab8883676c7c108c7af5678dd3c70b62c"
	Dec 05 07:06:01 old-k8s-version-874709 kubelet[733]: I1205 07:06:01.993761     733 scope.go:117] "RemoveContainer" containerID="ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b"
	Dec 05 07:06:02 old-k8s-version-874709 kubelet[733]: I1205 07:06:02.100280     733 scope.go:117] "RemoveContainer" containerID="ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b"
	Dec 05 07:06:02 old-k8s-version-874709 kubelet[733]: I1205 07:06:02.100514     733 scope.go:117] "RemoveContainer" containerID="f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee"
	Dec 05 07:06:02 old-k8s-version-874709 kubelet[733]: E1205 07:06:02.100882     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vhgmg_kubernetes-dashboard(a4105d97-2e10-47da-ad49-7e8b8c808636)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg" podUID="a4105d97-2e10-47da-ad49-7e8b8c808636"
	Dec 05 07:06:10 old-k8s-version-874709 kubelet[733]: I1205 07:06:10.735878     733 scope.go:117] "RemoveContainer" containerID="f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee"
	Dec 05 07:06:10 old-k8s-version-874709 kubelet[733]: E1205 07:06:10.736303     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vhgmg_kubernetes-dashboard(a4105d97-2e10-47da-ad49-7e8b8c808636)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg" podUID="a4105d97-2e10-47da-ad49-7e8b8c808636"
	Dec 05 07:06:16 old-k8s-version-874709 kubelet[733]: I1205 07:06:16.245390     733 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 05 07:06:16 old-k8s-version-874709 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:06:16 old-k8s-version-874709 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:06:16 old-k8s-version-874709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:06:16 old-k8s-version-874709 systemd[1]: kubelet.service: Consumed 1.385s CPU time.
	
	
	==> kubernetes-dashboard [64c85e718ac4a27fce72eae2812718ae0cc740e18fd72edafe1c18d3566e3a9a] <==
	2025/12/05 07:05:44 Starting overwatch
	2025/12/05 07:05:44 Using namespace: kubernetes-dashboard
	2025/12/05 07:05:44 Using in-cluster config to connect to apiserver
	2025/12/05 07:05:44 Using secret token for csrf signing
	2025/12/05 07:05:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/05 07:05:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/05 07:05:44 Successful initial request to the apiserver, version: v1.28.0
	2025/12/05 07:05:44 Generating JWE encryption key
	2025/12/05 07:05:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/05 07:05:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/05 07:05:44 Initializing JWE encryption key from synchronized object
	2025/12/05 07:05:44 Creating in-cluster Sidecar client
	2025/12/05 07:05:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:05:44 Serving insecurely on HTTP port: 9090
	2025/12/05 07:06:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [87a1771d8b8eb6617f0f7a7a79ed8a6ab8883676c7c108c7af5678dd3c70b62c] <==
	I1205 07:05:28.380539       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 07:05:58.382860       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244] <==
	I1205 07:05:59.141158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:05:59.147789       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:05:59.147841       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 07:06:16.541989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:06:16.542152       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-874709_1b2f2192-f079-4c31-8770-ec0b7a636ce5!
	I1205 07:06:16.542129       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"064af1fe-2240-4284-9f0a-716d2b949fbe", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-874709_1b2f2192-f079-4c31-8770-ec0b7a636ce5 became leader
	I1205 07:06:16.642412       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-874709_1b2f2192-f079-4c31-8770-ec0b7a636ce5!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-874709 -n old-k8s-version-874709
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-874709 -n old-k8s-version-874709: exit status 2 (369.792747ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-874709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-874709
helpers_test.go:243: (dbg) docker inspect old-k8s-version-874709:

-- stdout --
	[
	    {
	        "Id": "e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5",
	        "Created": "2025-12-05T07:04:05.274488478Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 361586,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:05:18.972892784Z",
	            "FinishedAt": "2025-12-05T07:05:18.104927096Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/hosts",
	        "LogPath": "/var/lib/docker/containers/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5/e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5-json.log",
	        "Name": "/old-k8s-version-874709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-874709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-874709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e58ec92f2b17639ffc9e32bf68f7ed2ec4a806ecde12ff8cb43196319ab3afc5",
	                "LowerDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4161b7303d4725e6c6df0d57d31ccb00f5d94847e5ccf38d2c46fb09eea2be80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-874709",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-874709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-874709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-874709",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-874709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "097b82bec7921f41e31893d8b8dfd25ae0a1a92896c8c9df10dd7263fca31a02",
	            "SandboxKey": "/var/run/docker/netns/097b82bec792",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-874709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b675820a4e14e6d815ef976a01c5649e140b5ac4be761da7497f0b550155e220",
	                    "EndpointID": "2b6edf6e3703b0d62935cffd8b181237c7ef2403fde734e9139cca1f323d5d9e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "d6:98:bf:32:88:bb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-874709",
	                        "e58ec92f2b17"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874709 -n old-k8s-version-874709
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874709 -n old-k8s-version-874709: exit status 2 (358.951601ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-874709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-874709 logs -n 25: (1.057701181s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-397607 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo containerd config dump                                                                                                                                                                                                  │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ ssh     │ -p bridge-397607 sudo crio config                                                                                                                                                                                                             │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p bridge-397607                                                                                                                                                                                                                              │ bridge-397607                │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ delete  │ -p disable-driver-mounts-245906                                                                                                                                                                                                               │ disable-driver-mounts-245906 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p old-k8s-version-874709 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-874709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p no-preload-008839 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-172186 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p no-preload-008839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                               │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:06:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:06:01.180353  369138 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:06:01.180586  369138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:01.180595  369138 out.go:374] Setting ErrFile to fd 2...
	I1205 07:06:01.180598  369138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:01.180785  369138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:06:01.181188  369138 out.go:368] Setting JSON to false
	I1205 07:06:01.182372  369138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6505,"bootTime":1764911856,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:06:01.182422  369138 start.go:143] virtualization: kvm guest
	I1205 07:06:01.183964  369138 out.go:179] * [default-k8s-diff-port-172186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:06:01.185424  369138 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:06:01.185435  369138 notify.go:221] Checking for updates...
	I1205 07:06:01.187226  369138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:06:01.188220  369138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:01.189317  369138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:06:01.190301  369138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:06:01.191442  369138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:06:01.192978  369138 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:01.193475  369138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:06:01.217006  369138 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:06:01.217083  369138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:01.269057  369138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-05 07:06:01.259668248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:01.269161  369138 docker.go:319] overlay module found
	I1205 07:06:01.270726  369138 out.go:179] * Using the docker driver based on existing profile
	I1205 07:06:01.273527  369138 start.go:309] selected driver: docker
	I1205 07:06:01.273546  369138 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:01.273660  369138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:06:01.274285  369138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:01.328638  369138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-05 07:06:01.319808984 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:01.328902  369138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:01.328935  369138 cni.go:84] Creating CNI manager for ""
	I1205 07:06:01.328984  369138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:01.329017  369138 start.go:353] cluster config:
	{Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:01.330498  369138 out.go:179] * Starting "default-k8s-diff-port-172186" primary control-plane node in "default-k8s-diff-port-172186" cluster
	I1205 07:06:01.331537  369138 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:06:01.332633  369138 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:06:01.333495  369138 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:01.333520  369138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 07:06:01.333527  369138 cache.go:65] Caching tarball of preloaded images
	I1205 07:06:01.333590  369138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:06:01.333612  369138 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 07:06:01.333619  369138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:06:01.333694  369138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/config.json ...
	I1205 07:06:01.352461  369138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:06:01.352477  369138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:06:01.352490  369138 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:06:01.352512  369138 start.go:360] acquireMachinesLock for default-k8s-diff-port-172186: {Name:mkc7b70f4fd2c66eec9f181ab0dc691b16be91dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:01.352565  369138 start.go:364] duration metric: took 31.412µs to acquireMachinesLock for "default-k8s-diff-port-172186"
	I1205 07:06:01.352581  369138 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:06:01.352586  369138 fix.go:54] fixHost starting: 
	I1205 07:06:01.352769  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:01.368837  369138 fix.go:112] recreateIfNeeded on default-k8s-diff-port-172186: state=Stopped err=<nil>
	W1205 07:06:01.368859  369138 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:05:59.098239  361350 pod_ready.go:104] pod "coredns-5dd5756b68-srvvk" is not "Ready", error: <nil>
	W1205 07:06:01.098851  361350 pod_ready.go:104] pod "coredns-5dd5756b68-srvvk" is not "Ready", error: <nil>
	I1205 07:06:02.598698  361350 pod_ready.go:94] pod "coredns-5dd5756b68-srvvk" is "Ready"
	I1205 07:06:02.598728  361350 pod_ready.go:86] duration metric: took 33.506059911s for pod "coredns-5dd5756b68-srvvk" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.601667  361350 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.606548  361350 pod_ready.go:94] pod "etcd-old-k8s-version-874709" is "Ready"
	I1205 07:06:02.606569  361350 pod_ready.go:86] duration metric: took 4.878762ms for pod "etcd-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.609599  361350 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.614289  361350 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-874709" is "Ready"
	I1205 07:06:02.614308  361350 pod_ready.go:86] duration metric: took 4.692692ms for pod "kube-apiserver-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.617295  361350 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.795595  361350 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-874709" is "Ready"
	I1205 07:06:02.795632  361350 pod_ready.go:86] duration metric: took 178.308346ms for pod "kube-controller-manager-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:02.997254  361350 pod_ready.go:83] waiting for pod "kube-proxy-98jls" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:03.396528  361350 pod_ready.go:94] pod "kube-proxy-98jls" is "Ready"
	I1205 07:06:03.396554  361350 pod_ready.go:86] duration metric: took 399.27461ms for pod "kube-proxy-98jls" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:03.597674  361350 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:05:58.862201  366710 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:05:58.867008  366710 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1205 07:05:58.867995  366710 api_server.go:141] control plane version: v1.35.0-beta.0
	I1205 07:05:58.868017  366710 api_server.go:131] duration metric: took 1.006376467s to wait for apiserver health ...
	I1205 07:05:58.868026  366710 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:05:58.871519  366710 system_pods.go:59] 8 kube-system pods found
	I1205 07:05:58.871555  366710 system_pods.go:61] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:58.871566  366710 system_pods.go:61] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:05:58.871579  366710 system_pods.go:61] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:05:58.871585  366710 system_pods.go:61] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:05:58.871593  366710 system_pods.go:61] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:05:58.871598  366710 system_pods.go:61] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:05:58.871609  366710 system_pods.go:61] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:05:58.871616  366710 system_pods.go:61] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:58.871628  366710 system_pods.go:74] duration metric: took 3.595932ms to wait for pod list to return data ...
	I1205 07:05:58.871641  366710 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:05:58.873971  366710 default_sa.go:45] found service account: "default"
	I1205 07:05:58.873989  366710 default_sa.go:55] duration metric: took 2.342026ms for default service account to be created ...
	I1205 07:05:58.873999  366710 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:05:58.876526  366710 system_pods.go:86] 8 kube-system pods found
	I1205 07:05:58.876552  366710 system_pods.go:89] "coredns-7d764666f9-bvbhf" [898995af-4e62-44f5-91b9-f7a35befdcb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:05:58.876564  366710 system_pods.go:89] "etcd-no-preload-008839" [79f76484-3a06-4028-ae52-0bea2752b835] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:05:58.876572  366710 system_pods.go:89] "kindnet-k65q9" [60bf9fdc-755d-4308-bf58-4a3d3459eddb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:05:58.876578  366710 system_pods.go:89] "kube-apiserver-no-preload-008839" [a2155807-c820-4d71-b174-373cd16c2a46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:05:58.876584  366710 system_pods.go:89] "kube-controller-manager-no-preload-008839" [dfb6931b-625a-4bdd-a4ab-e673f6fe1f27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:05:58.876592  366710 system_pods.go:89] "kube-proxy-s9zn2" [73b9d6c5-f629-4c51-943c-fd18a048eae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:05:58.876597  366710 system_pods.go:89] "kube-scheduler-no-preload-008839" [6a8251b4-9ab1-45c1-97f2-51680ae7c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:05:58.876605  366710 system_pods.go:89] "storage-provisioner" [45db8452-3833-4917-a660-183d0a4bcac4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:05:58.876611  366710 system_pods.go:126] duration metric: took 2.607202ms to wait for k8s-apps to be running ...
	I1205 07:05:58.876620  366710 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:05:58.876654  366710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:05:58.889311  366710 system_svc.go:56] duration metric: took 12.685986ms WaitForService to wait for kubelet
	I1205 07:05:58.889358  366710 kubeadm.go:587] duration metric: took 3.2316491s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:05:58.889379  366710 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:05:58.891693  366710 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:05:58.891712  366710 node_conditions.go:123] node cpu capacity is 8
	I1205 07:05:58.891725  366710 node_conditions.go:105] duration metric: took 2.341752ms to run NodePressure ...
	I1205 07:05:58.891735  366710 start.go:242] waiting for startup goroutines ...
	I1205 07:05:58.891745  366710 start.go:247] waiting for cluster config update ...
	I1205 07:05:58.891760  366710 start.go:256] writing updated cluster config ...
	I1205 07:05:58.891980  366710 ssh_runner.go:195] Run: rm -f paused
	I1205 07:05:58.895376  366710 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:05:58.898174  366710 pod_ready.go:83] waiting for pod "coredns-7d764666f9-bvbhf" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:06:00.903613  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:03.403874  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:03.996446  361350 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-874709" is "Ready"
	I1205 07:06:03.996477  361350 pod_ready.go:86] duration metric: took 398.777833ms for pod "kube-scheduler-old-k8s-version-874709" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:03.996491  361350 pod_ready.go:40] duration metric: took 34.907225297s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:04.054517  361350 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1205 07:06:04.057064  361350 out.go:203] 
	W1205 07:06:04.058523  361350 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1205 07:06:04.059711  361350 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1205 07:06:04.060978  361350 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-874709" cluster and "default" namespace by default
	I1205 07:06:01.370314  369138 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-172186" ...
	I1205 07:06:01.370393  369138 cli_runner.go:164] Run: docker start default-k8s-diff-port-172186
	I1205 07:06:01.617870  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:01.636485  369138 kic.go:430] container "default-k8s-diff-port-172186" state is running.
	I1205 07:06:01.636802  369138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172186
	I1205 07:06:01.654671  369138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/config.json ...
	I1205 07:06:01.654872  369138 machine.go:94] provisionDockerMachine start ...
	I1205 07:06:01.654941  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:01.673701  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:01.673924  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:01.673936  369138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:06:01.674676  369138 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46964->127.0.0.1:33123: read: connection reset by peer
	I1205 07:06:04.821968  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-172186
	
	I1205 07:06:04.821994  369138 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-172186"
	I1205 07:06:04.822076  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:04.844977  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:04.845221  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:04.845236  369138 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-172186 && echo "default-k8s-diff-port-172186" | sudo tee /etc/hostname
	I1205 07:06:05.021790  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-172186
	
	I1205 07:06:05.021876  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:05.048047  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:05.048394  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:05.048426  369138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-172186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-172186/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-172186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:06:05.207090  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:06:05.207125  369138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:06:05.207167  369138 ubuntu.go:190] setting up certificates
	I1205 07:06:05.207177  369138 provision.go:84] configureAuth start
	I1205 07:06:05.207255  369138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172186
	I1205 07:06:05.232395  369138 provision.go:143] copyHostCerts
	I1205 07:06:05.232460  369138 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:06:05.232471  369138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:06:05.232555  369138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:06:05.232703  369138 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:06:05.232719  369138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:06:05.232765  369138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:06:05.232861  369138 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:06:05.232872  369138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:06:05.232911  369138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:06:05.232988  369138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-172186 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-172186 localhost minikube]
	I1205 07:06:05.364735  369138 provision.go:177] copyRemoteCerts
	I1205 07:06:05.364786  369138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:06:05.364817  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:05.388117  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:05.499381  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:06:05.522631  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 07:06:05.545521  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:06:05.568070  369138 provision.go:87] duration metric: took 360.875348ms to configureAuth
	I1205 07:06:05.568099  369138 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:06:05.568372  369138 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:05.568548  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:05.590384  369138 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:05.590652  369138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1205 07:06:05.590675  369138 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:06:06.903874  369138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:06:06.903896  369138 machine.go:97] duration metric: took 5.249008974s to provisionDockerMachine
	I1205 07:06:06.903916  369138 start.go:293] postStartSetup for "default-k8s-diff-port-172186" (driver="docker")
	I1205 07:06:06.903928  369138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:06:06.903987  369138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:06:06.904029  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:06.925627  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:07.029099  369138 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:06:07.032724  369138 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:06:07.032746  369138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:06:07.032759  369138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:06:07.032815  369138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:06:07.032888  369138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:06:07.033013  369138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:06:07.041901  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:07.061013  369138 start.go:296] duration metric: took 157.082278ms for postStartSetup
	I1205 07:06:07.061092  369138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:06:07.061159  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:07.082205  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:07.182483  369138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:06:07.187449  369138 fix.go:56] duration metric: took 5.834857369s for fixHost
	I1205 07:06:07.187479  369138 start.go:83] releasing machines lock for "default-k8s-diff-port-172186", held for 5.834903523s
	I1205 07:06:07.187536  369138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172186
	I1205 07:06:07.207183  369138 ssh_runner.go:195] Run: cat /version.json
	I1205 07:06:07.207261  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:07.207265  369138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:06:07.207364  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:07.229035  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:07.229296  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:07.385648  369138 ssh_runner.go:195] Run: systemctl --version
	I1205 07:06:07.392589  369138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:06:07.430856  369138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:06:07.436189  369138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:06:07.436253  369138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:06:07.444842  369138 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:06:07.444862  369138 start.go:496] detecting cgroup driver to use...
	I1205 07:06:07.444893  369138 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:06:07.444951  369138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:06:07.460241  369138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:06:07.473695  369138 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:06:07.473762  369138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:06:07.489755  369138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:06:07.502411  369138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:06:07.588055  369138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:06:07.675270  369138 docker.go:234] disabling docker service ...
	I1205 07:06:07.675365  369138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:06:07.690468  369138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:06:07.703523  369138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:06:07.804032  369138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:06:07.886506  369138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:06:07.899154  369138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:06:07.913624  369138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:06:07.913693  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.922196  369138 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:06:07.922247  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.930564  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.938677  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.947127  369138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:06:07.954727  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.963475  369138 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.971688  369138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:07.982358  369138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:06:07.991662  369138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:06:07.999059  369138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:08.095980  369138 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:06:08.420298  369138 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:06:08.420383  369138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:06:08.424303  369138 start.go:564] Will wait 60s for crictl version
	I1205 07:06:08.424382  369138 ssh_runner.go:195] Run: which crictl
	I1205 07:06:08.428123  369138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:06:08.452789  369138 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:06:08.452861  369138 ssh_runner.go:195] Run: crio --version
	I1205 07:06:08.492736  369138 ssh_runner.go:195] Run: crio --version
	I1205 07:06:08.525519  369138 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	W1205 07:06:05.904238  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:08.403448  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:08.530209  369138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-172186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:08.549687  369138 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1205 07:06:08.553769  369138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:08.563884  369138 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:06:08.564005  369138 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:08.564046  369138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:08.595573  369138 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:06:08.595590  369138 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:06:08.595628  369138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:08.619710  369138 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:06:08.619728  369138 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:06:08.619735  369138 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.2 crio true true} ...
	I1205 07:06:08.619861  369138 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-172186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:06:08.619919  369138 ssh_runner.go:195] Run: crio config
	I1205 07:06:08.663749  369138 cni.go:84] Creating CNI manager for ""
	I1205 07:06:08.663775  369138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:08.663795  369138 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:06:08.663827  369138 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-172186 NodeName:default-k8s-diff-port-172186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:06:08.663978  369138 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-172186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:06:08.664049  369138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:06:08.671837  369138 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:06:08.671891  369138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:06:08.679356  369138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1205 07:06:08.691563  369138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:06:08.703421  369138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1205 07:06:08.715827  369138 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:06:08.719126  369138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:08.728395  369138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:08.813134  369138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:08.837383  369138 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186 for IP: 192.168.94.2
	I1205 07:06:08.837410  369138 certs.go:195] generating shared ca certs ...
	I1205 07:06:08.837426  369138 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:08.837599  369138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:06:08.837654  369138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:06:08.837673  369138 certs.go:257] generating profile certs ...
	I1205 07:06:08.837785  369138 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/client.key
	I1205 07:06:08.837854  369138 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/apiserver.key.83c70576
	I1205 07:06:08.837905  369138 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/proxy-client.key
	I1205 07:06:08.838051  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:06:08.838093  369138 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:06:08.838103  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:06:08.838137  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:06:08.838174  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:06:08.838208  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:06:08.838263  369138 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:08.838899  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:06:08.856272  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:06:08.874469  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:06:08.893284  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:06:08.915960  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 07:06:08.934214  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 07:06:08.950394  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:06:08.966781  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/default-k8s-diff-port-172186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:06:08.983164  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:06:08.999520  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:06:09.015937  369138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:06:09.033559  369138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:06:09.045273  369138 ssh_runner.go:195] Run: openssl version
	I1205 07:06:09.051115  369138 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:06:09.058003  369138 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:06:09.064725  369138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:06:09.068128  369138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:06:09.068173  369138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:06:09.106428  369138 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:06:09.113687  369138 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:09.121104  369138 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:06:09.128303  369138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:09.131941  369138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:09.131987  369138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:09.165708  369138 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:06:09.172574  369138 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:06:09.179353  369138 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:06:09.186638  369138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:06:09.190195  369138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:06:09.190251  369138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:06:09.224040  369138 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:06:09.230828  369138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:06:09.234193  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:06:09.268487  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:06:09.301515  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:06:09.334177  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:06:09.379697  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:06:09.427803  369138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
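	Note on the certificate checks above: `openssl x509 -hash -noout` prints the subject hash that OpenSSL uses for symlink names under /etc/ssl/certs (hence the `test -L /etc/ssl/certs/b5213941.0` after hashing minikubeCA.pem), and `-checkend 86400` asks whether a certificate expires within the next 86400 seconds (24 hours), presumably so an expiring cert can be regenerated. The same checks can be re-run by hand on the node, for example:

	    # subject hash of the minikube CA -> expected symlink name (b5213941 in this run)
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    ls -l /etc/ssl/certs/b5213941.0
	    # exit status 0 means the apiserver cert stays valid for at least another 24h
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid >24h"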
	I1205 07:06:09.485297  369138 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-172186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-172186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:09.485420  369138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:06:09.485525  369138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:06:09.520393  369138 cri.go:89] found id: "ed8de5e69d48178f99d8fc4509335772d9301f83872fdafa6ee82b6e6883c141"
	I1205 07:06:09.520417  369138 cri.go:89] found id: "b8424f777108894c3d90c6444a4cb21c9dab385dcfca8b378b0637e27eb4bd6f"
	I1205 07:06:09.520423  369138 cri.go:89] found id: "b75fc581167e9dc3ab0503563eaf8c4d2824d2a1cb80aeb0d90ec0ccbe49c84e"
	I1205 07:06:09.520428  369138 cri.go:89] found id: "d42f7b44a3dec7cdfb77e71f8c1b0ea379df337d93c48967c985cfb5efc79957"
	I1205 07:06:09.520432  369138 cri.go:89] found id: ""
	I1205 07:06:09.520479  369138 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:06:09.534965  369138 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:09Z" level=error msg="open /run/runc: no such file or directory"
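	Note: the runc error above (`open /run/runc: no such file or directory`) typically just means runc has no state directory on this node, for example because CRI-O is using a different default OCI runtime or no runc-managed containers exist yet; minikube records it as a warning and carries on with the restart path (see the "found existing configuration files" line below). The containers remain visible through the CRI instead, e.g.:

	    minikube -p default-k8s-diff-port-172186 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system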
	I1205 07:06:09.535034  369138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:06:09.545001  369138 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:06:09.545020  369138 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:06:09.545062  369138 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:06:09.553591  369138 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:06:09.554621  369138 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-172186" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:09.555353  369138 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-172186" cluster setting kubeconfig missing "default-k8s-diff-port-172186" context setting]
	I1205 07:06:09.556832  369138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:09.559016  369138 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:06:09.568009  369138 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1205 07:06:09.568049  369138 kubeadm.go:602] duration metric: took 23.022815ms to restartPrimaryControlPlane
	I1205 07:06:09.568059  369138 kubeadm.go:403] duration metric: took 82.77342ms to StartCluster
	I1205 07:06:09.568080  369138 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:09.568158  369138 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:09.570193  369138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:09.570467  369138 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:06:09.570663  369138 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:09.570629  369138 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:06:09.570743  369138 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-172186"
	I1205 07:06:09.570764  369138 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-172186"
	W1205 07:06:09.570772  369138 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:06:09.570790  369138 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-172186"
	I1205 07:06:09.570800  369138 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-172186"
	I1205 07:06:09.570807  369138 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:06:09.570819  369138 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-172186"
	I1205 07:06:09.570823  369138 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-172186"
	W1205 07:06:09.570829  369138 addons.go:248] addon dashboard should already be in state true
	I1205 07:06:09.570869  369138 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:06:09.571118  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:09.571276  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:09.571525  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:09.572720  369138 out.go:179] * Verifying Kubernetes components...
	I1205 07:06:09.574174  369138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:09.601249  369138 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:06:09.601301  369138 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:09.602496  369138 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:09.602524  369138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:06:09.602614  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:09.603532  369138 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 07:06:09.604392  369138 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-172186"
	I1205 07:06:09.604408  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	W1205 07:06:09.604414  369138 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:06:09.604419  369138 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:06:09.604440  369138 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:06:09.604484  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:09.605017  369138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:06:09.641337  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:09.643133  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:09.643965  369138 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:09.643985  369138 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:06:09.644041  369138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:06:09.668555  369138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:06:09.737475  369138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:09.750711  369138 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-172186" to be "Ready" ...
	I1205 07:06:09.766664  369138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:09.767545  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:06:09.767572  369138 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:06:09.785136  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:06:09.785154  369138 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:06:09.798169  369138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:09.805464  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:06:09.805487  369138 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:06:09.824092  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:06:09.824153  369138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:06:09.843896  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:06:09.843934  369138 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:06:09.861616  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:06:09.861637  369138 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:06:09.876693  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:06:09.876712  369138 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:06:09.890832  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:06:09.890848  369138 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:06:09.906231  369138 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:06:09.906258  369138 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:06:09.920399  369138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:06:11.522589  369138 node_ready.go:49] node "default-k8s-diff-port-172186" is "Ready"
	I1205 07:06:11.522618  369138 node_ready.go:38] duration metric: took 1.771873848s for node "default-k8s-diff-port-172186" to be "Ready" ...
	I1205 07:06:11.522633  369138 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:06:11.522681  369138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:06:12.014838  369138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.248140228s)
	I1205 07:06:12.014932  369138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.216729098s)
	I1205 07:06:12.015042  369138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.09461333s)
	I1205 07:06:12.015096  369138 api_server.go:72] duration metric: took 2.444598602s to wait for apiserver process to appear ...
	I1205 07:06:12.015116  369138 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:06:12.015187  369138 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1205 07:06:12.016535  369138 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-172186 addons enable metrics-server
	
	I1205 07:06:12.019788  369138 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:06:12.019807  369138 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
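	Note: the 500 responses above are the expected transient state while the restarted apiserver finishes its post-start hooks (only rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes report failed here); minikube keeps polling until /healthz returns 200, which it does at 07:06:13 further down. The same verbose check can usually be reproduced by hand, since anonymous access to /healthz is normally granted by the system:public-info-viewer role:

	    curl -sk "https://192.168.94.2:8444/healthz?verbose"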
	I1205 07:06:12.023173  369138 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1205 07:06:10.404234  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:12.902940  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:12.024135  369138 addons.go:530] duration metric: took 2.453513644s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1205 07:06:12.515923  369138 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1205 07:06:12.520861  369138 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:06:12.520889  369138 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:06:13.015284  369138 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1205 07:06:13.019975  369138 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1205 07:06:13.020990  369138 api_server.go:141] control plane version: v1.34.2
	I1205 07:06:13.021016  369138 api_server.go:131] duration metric: took 1.005842634s to wait for apiserver health ...
	I1205 07:06:13.021026  369138 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:06:13.023666  369138 system_pods.go:59] 8 kube-system pods found
	I1205 07:06:13.023702  369138 system_pods.go:61] "coredns-66bc5c9577-lzlm8" [ee60b2ad-840a-442d-9475-85e27048c452] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:06:13.023712  369138 system_pods.go:61] "etcd-default-k8s-diff-port-172186" [f165837d-edeb-4226-920b-b23d2ca9bf68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:06:13.023721  369138 system_pods.go:61] "kindnet-w2mzg" [3de2accc-6a87-4b4c-920d-74d5b5058c8e] Running
	I1205 07:06:13.023728  369138 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-172186" [f0c01c8a-a8dd-4883-9b95-1c85dddc33d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:13.023738  369138 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-172186" [74cc489e-2a21-4ab1-b8a3-b2bfca1c58ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:13.023742  369138 system_pods.go:61] "kube-proxy-fpss6" [9c1a939e-c7e6-4202-bffa-374ace420fd7] Running
	I1205 07:06:13.023747  369138 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-172186" [e0764d08-18fe-47c0-b6b1-648c2c6fb1db] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:13.023754  369138 system_pods.go:61] "storage-provisioner" [cf31286d-bf29-4883-828c-4e9aee83201f] Running
	I1205 07:06:13.023760  369138 system_pods.go:74] duration metric: took 2.728175ms to wait for pod list to return data ...
	I1205 07:06:13.023770  369138 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:06:13.025735  369138 default_sa.go:45] found service account: "default"
	I1205 07:06:13.025754  369138 default_sa.go:55] duration metric: took 1.97857ms for default service account to be created ...
	I1205 07:06:13.025764  369138 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:06:13.028200  369138 system_pods.go:86] 8 kube-system pods found
	I1205 07:06:13.028223  369138 system_pods.go:89] "coredns-66bc5c9577-lzlm8" [ee60b2ad-840a-442d-9475-85e27048c452] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:06:13.028231  369138 system_pods.go:89] "etcd-default-k8s-diff-port-172186" [f165837d-edeb-4226-920b-b23d2ca9bf68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:06:13.028236  369138 system_pods.go:89] "kindnet-w2mzg" [3de2accc-6a87-4b4c-920d-74d5b5058c8e] Running
	I1205 07:06:13.028242  369138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-172186" [f0c01c8a-a8dd-4883-9b95-1c85dddc33d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:13.028248  369138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-172186" [74cc489e-2a21-4ab1-b8a3-b2bfca1c58ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:13.028253  369138 system_pods.go:89] "kube-proxy-fpss6" [9c1a939e-c7e6-4202-bffa-374ace420fd7] Running
	I1205 07:06:13.028258  369138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-172186" [e0764d08-18fe-47c0-b6b1-648c2c6fb1db] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:13.028262  369138 system_pods.go:89] "storage-provisioner" [cf31286d-bf29-4883-828c-4e9aee83201f] Running
	I1205 07:06:13.028268  369138 system_pods.go:126] duration metric: took 2.498302ms to wait for k8s-apps to be running ...
	I1205 07:06:13.028277  369138 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:06:13.028333  369138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:13.040713  369138 system_svc.go:56] duration metric: took 12.430515ms WaitForService to wait for kubelet
	I1205 07:06:13.040732  369138 kubeadm.go:587] duration metric: took 3.470237015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:13.040746  369138 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:06:13.042771  369138 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:06:13.042790  369138 node_conditions.go:123] node cpu capacity is 8
	I1205 07:06:13.042814  369138 node_conditions.go:105] duration metric: took 2.063513ms to run NodePressure ...
	I1205 07:06:13.042823  369138 start.go:242] waiting for startup goroutines ...
	I1205 07:06:13.042830  369138 start.go:247] waiting for cluster config update ...
	I1205 07:06:13.042839  369138 start.go:256] writing updated cluster config ...
	I1205 07:06:13.043057  369138 ssh_runner.go:195] Run: rm -f paused
	I1205 07:06:13.046776  369138 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:13.050088  369138 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lzlm8" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:06:15.054020  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:14.903791  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:16.904837  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 05 07:05:47 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:47.106967285Z" level=info msg="Started container" PID=1763 containerID=ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper id=236a6e31-acd9-407c-9cdf-32e4b5e2a153 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b62159d1713eef2a7aba0953fad4ebf207b5b35255c8c0a6cf684c79cf4e2c4b
	Dec 05 07:05:48 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:48.065734661Z" level=info msg="Removing container: 40c519087f6367b7281c0cf35cac3fd8621ea8c5e77dcb91ef9fecf71d44e4ba" id=6a06d94a-ad7e-4fb5-b0b2-82134effa813 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:05:48 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:48.076685723Z" level=info msg="Removed container 40c519087f6367b7281c0cf35cac3fd8621ea8c5e77dcb91ef9fecf71d44e4ba: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper" id=6a06d94a-ad7e-4fb5-b0b2-82134effa813 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.090731888Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=83f71281-80b3-4a1c-81df-dce5bad9bb44 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.091584215Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d6b0f75f-15b8-4038-b523-ccc3d08271aa name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.092476985Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9a06ffbb-0123-42e0-ab67-36cdd6a1be46 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.092696741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.097505057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.097691162Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ff82fa1671c22fd2d93d67169a16a05c1caca7b029a11886d3fb53bdd0356d14/merged/etc/passwd: no such file or directory"
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.097716015Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ff82fa1671c22fd2d93d67169a16a05c1caca7b029a11886d3fb53bdd0356d14/merged/etc/group: no such file or directory"
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.098004501Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.127585174Z" level=info msg="Created container 9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244: kube-system/storage-provisioner/storage-provisioner" id=9a06ffbb-0123-42e0-ab67-36cdd6a1be46 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.128030394Z" level=info msg="Starting container: 9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244" id=b1467f2d-edec-4395-8fc2-0f84696f03c2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:05:59 old-k8s-version-874709 crio[567]: time="2025-12-05T07:05:59.129638465Z" level=info msg="Started container" PID=1778 containerID=9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244 description=kube-system/storage-provisioner/storage-provisioner id=b1467f2d-edec-4395-8fc2-0f84696f03c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9403b42cf53968038c0583742a6622d795b5df21d9a239162cc9ab200b3e8e9
	Dec 05 07:06:01 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:01.994356333Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9e350688-2cba-440b-8178-fb50f7af443d name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:01 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:01.995317645Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=61bf287e-1350-4849-b5d9-35f31e9f2812 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:01 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:01.99638489Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper" id=62baa06c-a8a6-444d-be68-7afdf0164744 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:01 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:01.996521771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.001849831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.002309849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.028105896Z" level=info msg="Created container f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper" id=62baa06c-a8a6-444d-be68-7afdf0164744 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.028608919Z" level=info msg="Starting container: f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee" id=7449f139-779d-4a99-9694-17f595afe7e3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.030105257Z" level=info msg="Started container" PID=1794 containerID=f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper id=7449f139-779d-4a99-9694-17f595afe7e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b62159d1713eef2a7aba0953fad4ebf207b5b35255c8c0a6cf684c79cf4e2c4b
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.101501848Z" level=info msg="Removing container: ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b" id=a7914892-5ca7-4575-ad13-ee4b23056cc5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:02 old-k8s-version-874709 crio[567]: time="2025-12-05T07:06:02.113214074Z" level=info msg="Removed container ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg/dashboard-metrics-scraper" id=a7914892-5ca7-4575-ad13-ee4b23056cc5 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f2f2a155f4693       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   b62159d1713ee       dashboard-metrics-scraper-5f989dc9cf-vhgmg       kubernetes-dashboard
	9ba86f612b662       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   b9403b42cf539       storage-provisioner                              kube-system
	64c85e718ac4a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   74c7ae571a78b       kubernetes-dashboard-8694d4445c-xn6nb            kubernetes-dashboard
	9fbd3a07129cf       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   1a60cf99343d8       busybox                                          default
	52173fac10a5e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   d729cd909fac8       coredns-5dd5756b68-srvvk                         kube-system
	87a1771d8b8eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   b9403b42cf539       storage-provisioner                              kube-system
	189089a1551ba       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   55708cdd43b71       kindnet-f9lmb                                    kube-system
	d6ff518de54f6       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   bf876763196e2       kube-proxy-98jls                                 kube-system
	a5a9622dfd7dc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   3a2ef019cb23c       kube-apiserver-old-k8s-version-874709            kube-system
	6be13235867d4       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   1adbd89eeb21e       kube-scheduler-old-k8s-version-874709            kube-system
	7c7e915cc7bec       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   41efa262abe73       etcd-old-k8s-version-874709                      kube-system
	ffe21b4df5d3a       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   341a1abe92e7d       kube-controller-manager-old-k8s-version-874709   kube-system
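	Note: the table above is the CRI's view of all containers on the node, so exited containers (the second dashboard-metrics-scraper attempt and the pre-restart storage-provisioner) stay listed next to their running replacements. Roughly the same listing can be pulled directly from the node, assuming the profile name matches the node name shown in the CRI-O log above:

	    minikube -p old-k8s-version-874709 ssh -- sudo crictl ps -a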
	
	
	==> coredns [52173fac10a5e3ea6e7f6a16a2d0beb412c01dcc4c73551b2d1d4d3d9a969797] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49596 - 8409 "HINFO IN 2101740535183586278.54168126310120430. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.099426689s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
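	Note: the final CoreDNS warning (`dial tcp 10.96.0.1:443: i/o timeout`) means the pod could not reach the kube-apiserver through the default Service VIP of the 10.96.0.0/12 service range at that moment, which is also why the ready plugin keeps logging "Still waiting on: \"kubernetes\"" above; CoreDNS only turns Ready once that connection succeeds. A follow-up check once the control plane settles (using the standard kube-dns label, and assuming the kubectl context is named after this profile) might look like:

	    kubectl --context old-k8s-version-874709 -n kube-system get pods -l k8s-app=kube-dns -o wide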
	
	
	==> describe nodes <==
	Name:               old-k8s-version-874709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-874709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=old-k8s-version-874709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_04_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:04:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-874709
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:06:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:05:58 +0000   Fri, 05 Dec 2025 07:04:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:05:58 +0000   Fri, 05 Dec 2025 07:04:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:05:58 +0000   Fri, 05 Dec 2025 07:04:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:05:58 +0000   Fri, 05 Dec 2025 07:04:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-874709
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                5af588f9-e276-46d0-bc7e-d873d5f0f898
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-srvvk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-874709                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-f9lmb                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-874709             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-874709    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-98jls                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-874709             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vhgmg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-xn6nb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-874709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-874709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node old-k8s-version-874709 event: Registered Node old-k8s-version-874709 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-874709 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)    kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)    kubelet          Node old-k8s-version-874709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)    kubelet          Node old-k8s-version-874709 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-874709 event: Registered Node old-k8s-version-874709 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [7c7e915cc7becaf51abc1256271d87f755bc16e224a0daf6a90d291932385f08] <==
	{"level":"info","ts":"2025-12-05T07:05:25.554788Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-05T07:05:25.554798Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-05T07:05:25.554957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-05T07:05:25.555124Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-05T07:05:25.555291Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-05T07:05:25.555346Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-05T07:05:25.557077Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-05T07:05:25.557842Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-05T07:05:25.558617Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-05T07:05:25.558151Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-05T07:05:25.558488Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-05T07:05:26.646404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-05T07:05:26.646442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-05T07:05:26.646474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-05T07:05:26.646486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-05T07:05:26.646507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-05T07:05:26.646514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-05T07:05:26.646526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-05T07:05:26.647987Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-874709 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-05T07:05:26.647993Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-05T07:05:26.64808Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-05T07:05:26.648188Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-05T07:05:26.648213Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-05T07:05:26.649557Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-05T07:05:26.649662Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 07:06:21 up  1:48,  0 user,  load average: 3.79, 3.33, 2.23
	Linux old-k8s-version-874709 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [189089a1551ba3627eb3128161e1bb599ef06f715efd379e386fde9d94c02bf3] <==
	I1205 07:05:28.603267       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:05:28.603579       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1205 07:05:28.603777       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:05:28.603801       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:05:28.603828       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:05:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:05:28.805103       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:05:28.805259       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:05:28.805277       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:05:28.805449       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:05:29.105734       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:05:29.105765       1 metrics.go:72] Registering metrics
	I1205 07:05:29.105849       1 controller.go:711] "Syncing nftables rules"
	I1205 07:05:38.805530       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:05:38.805571       1 main.go:301] handling current node
	I1205 07:05:48.805543       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:05:48.805591       1 main.go:301] handling current node
	I1205 07:05:58.805282       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:05:58.805311       1 main.go:301] handling current node
	I1205 07:06:08.805247       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:06:08.805286       1 main.go:301] handling current node
	I1205 07:06:18.811459       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1205 07:06:18.811501       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5a9622dfd7dc6fdcabf3ea8aec3eaeabfdda77bc311ed906f332cc7d039353d] <==
	I1205 07:05:27.569430       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1205 07:05:27.616870       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:05:27.652362       1 shared_informer.go:318] Caches are synced for configmaps
	I1205 07:05:27.652402       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1205 07:05:27.652406       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1205 07:05:27.652498       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 07:05:27.652369       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1205 07:05:27.652882       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1205 07:05:27.652377       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 07:05:27.669951       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1205 07:05:27.669988       1 aggregator.go:166] initial CRD sync complete...
	I1205 07:05:27.669994       1 autoregister_controller.go:141] Starting autoregister controller
	I1205 07:05:27.669999       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:05:27.670005       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:05:28.445173       1 controller.go:624] quota admission added evaluator for: namespaces
	I1205 07:05:28.474134       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1205 07:05:28.489417       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:05:28.497814       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:05:28.506206       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1205 07:05:28.537958       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.150.131"}
	I1205 07:05:28.548917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:05:28.551054       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.11.26"}
	I1205 07:05:40.265508       1 controller.go:624] quota admission added evaluator for: endpoints
	I1205 07:05:40.315515       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:05:40.416213       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ffe21b4df5d3a969685218725304cbe5f9fc2b6432a5f7451e96a4edabf288fc] <==
	I1205 07:05:40.370379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="131.181µs"
	I1205 07:05:40.418402       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1205 07:05:40.419719       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1205 07:05:40.426812       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-xn6nb"
	I1205 07:05:40.426931       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-vhgmg"
	I1205 07:05:40.431287       1 shared_informer.go:318] Caches are synced for garbage collector
	I1205 07:05:40.431409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.27221ms"
	I1205 07:05:40.433151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.870596ms"
	I1205 07:05:40.443033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.540799ms"
	I1205 07:05:40.443101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.943µs"
	I1205 07:05:40.443116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.808304ms"
	I1205 07:05:40.443150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.957µs"
	I1205 07:05:40.448622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.155µs"
	I1205 07:05:40.456258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="37.328µs"
	I1205 07:05:40.481672       1 shared_informer.go:318] Caches are synced for garbage collector
	I1205 07:05:40.481696       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1205 07:05:45.072781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.238781ms"
	I1205 07:05:45.073041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.815µs"
	I1205 07:05:47.071484       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.123µs"
	I1205 07:05:48.077002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.273µs"
	I1205 07:05:49.081724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="121.623µs"
	I1205 07:06:02.110934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="118.876µs"
	I1205 07:06:02.356973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.109265ms"
	I1205 07:06:02.357308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.863µs"
	I1205 07:06:10.747162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.887µs"
	
	
	==> kube-proxy [d6ff518de54f6fad8b6cef69f6ec5441de106d8cf80d95cb9fd83fa183cec7a0] <==
	I1205 07:05:28.410103       1 server_others.go:69] "Using iptables proxy"
	I1205 07:05:28.419095       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1205 07:05:28.437091       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:05:28.439375       1 server_others.go:152] "Using iptables Proxier"
	I1205 07:05:28.439404       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1205 07:05:28.439414       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1205 07:05:28.439447       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 07:05:28.439666       1 server.go:846] "Version info" version="v1.28.0"
	I1205 07:05:28.439683       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:28.441083       1 config.go:188] "Starting service config controller"
	I1205 07:05:28.441120       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 07:05:28.441155       1 config.go:97] "Starting endpoint slice config controller"
	I1205 07:05:28.441162       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 07:05:28.441388       1 config.go:315] "Starting node config controller"
	I1205 07:05:28.441441       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 07:05:28.541641       1 shared_informer.go:318] Caches are synced for node config
	I1205 07:05:28.541675       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 07:05:28.541692       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6be13235867d468a9e246f51290d3c4f7ea7f6f8510393f2a1b3dab9fbb99a9b] <==
	I1205 07:05:26.056981       1 serving.go:348] Generated self-signed cert in-memory
	W1205 07:05:27.603881       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 07:05:27.603921       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 07:05:27.603936       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 07:05:27.603946       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 07:05:27.624256       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1205 07:05:27.624291       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:27.626030       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:05:27.626063       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 07:05:27.627221       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1205 07:05:27.627298       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1205 07:05:27.727198       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 07:05:40 old-k8s-version-874709 kubelet[733]: I1205 07:05:40.474469     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmxpp\" (UniqueName: \"kubernetes.io/projected/fd771a55-07e2-4e40-8419-550f7c0bfe62-kube-api-access-xmxpp\") pod \"kubernetes-dashboard-8694d4445c-xn6nb\" (UID: \"fd771a55-07e2-4e40-8419-550f7c0bfe62\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xn6nb"
	Dec 05 07:05:40 old-k8s-version-874709 kubelet[733]: I1205 07:05:40.474517     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fd771a55-07e2-4e40-8419-550f7c0bfe62-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-xn6nb\" (UID: \"fd771a55-07e2-4e40-8419-550f7c0bfe62\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xn6nb"
	Dec 05 07:05:40 old-k8s-version-874709 kubelet[733]: I1205 07:05:40.474546     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a4105d97-2e10-47da-ad49-7e8b8c808636-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-vhgmg\" (UID: \"a4105d97-2e10-47da-ad49-7e8b8c808636\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg"
	Dec 05 07:05:40 old-k8s-version-874709 kubelet[733]: I1205 07:05:40.474572     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txpl\" (UniqueName: \"kubernetes.io/projected/a4105d97-2e10-47da-ad49-7e8b8c808636-kube-api-access-5txpl\") pod \"dashboard-metrics-scraper-5f989dc9cf-vhgmg\" (UID: \"a4105d97-2e10-47da-ad49-7e8b8c808636\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg"
	Dec 05 07:05:45 old-k8s-version-874709 kubelet[733]: I1205 07:05:45.066641     733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xn6nb" podStartSLOduration=1.782376309 podCreationTimestamp="2025-12-05 07:05:40 +0000 UTC" firstStartedPulling="2025-12-05 07:05:40.756568773 +0000 UTC m=+15.848699287" lastFinishedPulling="2025-12-05 07:05:44.040773913 +0000 UTC m=+19.132904433" observedRunningTime="2025-12-05 07:05:45.066316261 +0000 UTC m=+20.158446797" watchObservedRunningTime="2025-12-05 07:05:45.066581455 +0000 UTC m=+20.158711974"
	Dec 05 07:05:47 old-k8s-version-874709 kubelet[733]: I1205 07:05:47.060422     733 scope.go:117] "RemoveContainer" containerID="40c519087f6367b7281c0cf35cac3fd8621ea8c5e77dcb91ef9fecf71d44e4ba"
	Dec 05 07:05:48 old-k8s-version-874709 kubelet[733]: I1205 07:05:48.064286     733 scope.go:117] "RemoveContainer" containerID="40c519087f6367b7281c0cf35cac3fd8621ea8c5e77dcb91ef9fecf71d44e4ba"
	Dec 05 07:05:48 old-k8s-version-874709 kubelet[733]: I1205 07:05:48.064497     733 scope.go:117] "RemoveContainer" containerID="ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b"
	Dec 05 07:05:48 old-k8s-version-874709 kubelet[733]: E1205 07:05:48.064884     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vhgmg_kubernetes-dashboard(a4105d97-2e10-47da-ad49-7e8b8c808636)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg" podUID="a4105d97-2e10-47da-ad49-7e8b8c808636"
	Dec 05 07:05:49 old-k8s-version-874709 kubelet[733]: I1205 07:05:49.068562     733 scope.go:117] "RemoveContainer" containerID="ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b"
	Dec 05 07:05:49 old-k8s-version-874709 kubelet[733]: E1205 07:05:49.068936     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vhgmg_kubernetes-dashboard(a4105d97-2e10-47da-ad49-7e8b8c808636)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg" podUID="a4105d97-2e10-47da-ad49-7e8b8c808636"
	Dec 05 07:05:50 old-k8s-version-874709 kubelet[733]: I1205 07:05:50.735651     733 scope.go:117] "RemoveContainer" containerID="ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b"
	Dec 05 07:05:50 old-k8s-version-874709 kubelet[733]: E1205 07:05:50.735911     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vhgmg_kubernetes-dashboard(a4105d97-2e10-47da-ad49-7e8b8c808636)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg" podUID="a4105d97-2e10-47da-ad49-7e8b8c808636"
	Dec 05 07:05:59 old-k8s-version-874709 kubelet[733]: I1205 07:05:59.090315     733 scope.go:117] "RemoveContainer" containerID="87a1771d8b8eb6617f0f7a7a79ed8a6ab8883676c7c108c7af5678dd3c70b62c"
	Dec 05 07:06:01 old-k8s-version-874709 kubelet[733]: I1205 07:06:01.993761     733 scope.go:117] "RemoveContainer" containerID="ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b"
	Dec 05 07:06:02 old-k8s-version-874709 kubelet[733]: I1205 07:06:02.100280     733 scope.go:117] "RemoveContainer" containerID="ae00554c1f509d6957ba2b1df7391aae7015c516161ec44a86709b620f7b030b"
	Dec 05 07:06:02 old-k8s-version-874709 kubelet[733]: I1205 07:06:02.100514     733 scope.go:117] "RemoveContainer" containerID="f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee"
	Dec 05 07:06:02 old-k8s-version-874709 kubelet[733]: E1205 07:06:02.100882     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vhgmg_kubernetes-dashboard(a4105d97-2e10-47da-ad49-7e8b8c808636)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg" podUID="a4105d97-2e10-47da-ad49-7e8b8c808636"
	Dec 05 07:06:10 old-k8s-version-874709 kubelet[733]: I1205 07:06:10.735878     733 scope.go:117] "RemoveContainer" containerID="f2f2a155f4693afe32e510df436d1441d6392f5ccd1000d6607896a80d1fe3ee"
	Dec 05 07:06:10 old-k8s-version-874709 kubelet[733]: E1205 07:06:10.736303     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vhgmg_kubernetes-dashboard(a4105d97-2e10-47da-ad49-7e8b8c808636)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vhgmg" podUID="a4105d97-2e10-47da-ad49-7e8b8c808636"
	Dec 05 07:06:16 old-k8s-version-874709 kubelet[733]: I1205 07:06:16.245390     733 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 05 07:06:16 old-k8s-version-874709 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:06:16 old-k8s-version-874709 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:06:16 old-k8s-version-874709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:06:16 old-k8s-version-874709 systemd[1]: kubelet.service: Consumed 1.385s CPU time.
	
	
	==> kubernetes-dashboard [64c85e718ac4a27fce72eae2812718ae0cc740e18fd72edafe1c18d3566e3a9a] <==
	2025/12/05 07:05:44 Starting overwatch
	2025/12/05 07:05:44 Using namespace: kubernetes-dashboard
	2025/12/05 07:05:44 Using in-cluster config to connect to apiserver
	2025/12/05 07:05:44 Using secret token for csrf signing
	2025/12/05 07:05:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/05 07:05:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/05 07:05:44 Successful initial request to the apiserver, version: v1.28.0
	2025/12/05 07:05:44 Generating JWE encryption key
	2025/12/05 07:05:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/05 07:05:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/05 07:05:44 Initializing JWE encryption key from synchronized object
	2025/12/05 07:05:44 Creating in-cluster Sidecar client
	2025/12/05 07:05:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:05:44 Serving insecurely on HTTP port: 9090
	2025/12/05 07:06:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [87a1771d8b8eb6617f0f7a7a79ed8a6ab8883676c7c108c7af5678dd3c70b62c] <==
	I1205 07:05:28.380539       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 07:05:58.382860       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9ba86f612b662c54c3c90978cf39aba095be1d6776c8f94e4574540085d32244] <==
	I1205 07:05:59.141158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:05:59.147789       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:05:59.147841       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 07:06:16.541989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:06:16.542152       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-874709_1b2f2192-f079-4c31-8770-ec0b7a636ce5!
	I1205 07:06:16.542129       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"064af1fe-2240-4284-9f0a-716d2b949fbe", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-874709_1b2f2192-f079-4c31-8770-ec0b7a636ce5 became leader
	I1205 07:06:16.642412       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-874709_1b2f2192-f079-4c31-8770-ec0b7a636ce5!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-874709 -n old-k8s-version-874709
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-874709 -n old-k8s-version-874709: exit status 2 (313.049602ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-874709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.68s)
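For local triage, the failing subtest above can be re-run on its own with Go's -run filter. This is only a sketch, not the job's actual invocation: it assumes a minikube source checkout with out/minikube-linux-amd64 already built, and the integration suite normally needs extra arguments (the integration build tag plus driver and container-runtime flags matching this docker/crio job) that are not recorded in this report.

	# Sketch only; the build tag and any suite-specific flags are assumptions.
	cd minikube   # source checkout with out/minikube-linux-amd64 already built
	go test -tags=integration -timeout 60m \
	  -run "TestStartStop/group/old-k8s-version/serial/Pause" \
	  ./test/integration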

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-008839 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-008839 --alsologtostderr -v=1: exit status 80 (2.01385809s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-008839 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:06:46.221173  380787 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:06:46.221366  380787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:46.221402  380787 out.go:374] Setting ErrFile to fd 2...
	I1205 07:06:46.221420  380787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:46.221785  380787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:06:46.222153  380787 out.go:368] Setting JSON to false
	I1205 07:06:46.222196  380787 mustload.go:66] Loading cluster: no-preload-008839
	I1205 07:06:46.222725  380787 config.go:182] Loaded profile config "no-preload-008839": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:06:46.223274  380787 cli_runner.go:164] Run: docker container inspect no-preload-008839 --format={{.State.Status}}
	I1205 07:06:46.247787  380787 host.go:66] Checking if "no-preload-008839" exists ...
	I1205 07:06:46.248267  380787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:46.320696  380787 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:86 OomKillDisable:false NGoroutines:94 SystemTime:2025-12-05 07:06:46.307439827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:46.321537  380787 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-008839 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1205 07:06:46.322993  380787 out.go:179] * Pausing node no-preload-008839 ... 
	I1205 07:06:46.324245  380787 host.go:66] Checking if "no-preload-008839" exists ...
	I1205 07:06:46.324590  380787 ssh_runner.go:195] Run: systemctl --version
	I1205 07:06:46.324740  380787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-008839
	I1205 07:06:46.348542  380787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/no-preload-008839/id_rsa Username:docker}
	I1205 07:06:46.463638  380787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:46.481353  380787 pause.go:52] kubelet running: true
	I1205 07:06:46.481414  380787 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:06:46.722488  380787 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:06:46.722597  380787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:06:46.803232  380787 cri.go:89] found id: "8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6"
	I1205 07:06:46.803273  380787 cri.go:89] found id: "d5679f317a43257700a6ccf786a90e51b3e511459a6a40b7b87ce098fef9f917"
	I1205 07:06:46.803281  380787 cri.go:89] found id: "041ee86966827d1886c5681f5cc5a2513966eb3b32160dabab858784a89fb062"
	I1205 07:06:46.803287  380787 cri.go:89] found id: "2073d619fdee4927ee6cab8da5025189478e4d40ae7780f71aca88691a55b2b6"
	I1205 07:06:46.803292  380787 cri.go:89] found id: "eba75d111920093803e4d959a724517ca2eb3568d86480365967a5d7db5ff7c7"
	I1205 07:06:46.803299  380787 cri.go:89] found id: "6a724b46320af3fc8ab17876c05bc17339d6f6ecdfe81d092e5183ab79c4eff0"
	I1205 07:06:46.803303  380787 cri.go:89] found id: "594bd97237274f1209e2fd22044fdd8fa87336d8f65f7ae5ab3d67cbd890b73e"
	I1205 07:06:46.803307  380787 cri.go:89] found id: "be81b724a08e37b312d3b403f0b0b16774c9d6683375247cd1da277090b0bb4c"
	I1205 07:06:46.803312  380787 cri.go:89] found id: "db01c7251a1de792a86f18e9816a7049b81ed772e45d77eb735784deca6ba7ed"
	I1205 07:06:46.803344  380787 cri.go:89] found id: "796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae"
	I1205 07:06:46.803350  380787 cri.go:89] found id: "c24118d3ceb705dfa27fd02fb7a78d52069c473b9d07b42ae3776ce72626c519"
	I1205 07:06:46.803354  380787 cri.go:89] found id: ""
	I1205 07:06:46.803408  380787 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:06:46.815815  380787 retry.go:31] will retry after 241.382357ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:46Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:06:47.058313  380787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:47.075222  380787 pause.go:52] kubelet running: false
	I1205 07:06:47.075291  380787 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:06:47.287295  380787 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:06:47.287430  380787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:06:47.394897  380787 cri.go:89] found id: "8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6"
	I1205 07:06:47.394924  380787 cri.go:89] found id: "d5679f317a43257700a6ccf786a90e51b3e511459a6a40b7b87ce098fef9f917"
	I1205 07:06:47.394931  380787 cri.go:89] found id: "041ee86966827d1886c5681f5cc5a2513966eb3b32160dabab858784a89fb062"
	I1205 07:06:47.394935  380787 cri.go:89] found id: "2073d619fdee4927ee6cab8da5025189478e4d40ae7780f71aca88691a55b2b6"
	I1205 07:06:47.394940  380787 cri.go:89] found id: "eba75d111920093803e4d959a724517ca2eb3568d86480365967a5d7db5ff7c7"
	I1205 07:06:47.394945  380787 cri.go:89] found id: "6a724b46320af3fc8ab17876c05bc17339d6f6ecdfe81d092e5183ab79c4eff0"
	I1205 07:06:47.394949  380787 cri.go:89] found id: "594bd97237274f1209e2fd22044fdd8fa87336d8f65f7ae5ab3d67cbd890b73e"
	I1205 07:06:47.394954  380787 cri.go:89] found id: "be81b724a08e37b312d3b403f0b0b16774c9d6683375247cd1da277090b0bb4c"
	I1205 07:06:47.394958  380787 cri.go:89] found id: "db01c7251a1de792a86f18e9816a7049b81ed772e45d77eb735784deca6ba7ed"
	I1205 07:06:47.394970  380787 cri.go:89] found id: "796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae"
	I1205 07:06:47.394974  380787 cri.go:89] found id: "c24118d3ceb705dfa27fd02fb7a78d52069c473b9d07b42ae3776ce72626c519"
	I1205 07:06:47.394978  380787 cri.go:89] found id: ""
	I1205 07:06:47.395024  380787 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:06:47.417239  380787 retry.go:31] will retry after 499.975737ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:47Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:06:47.918119  380787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:47.930902  380787 pause.go:52] kubelet running: false
	I1205 07:06:47.930954  380787 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:06:48.067978  380787 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:06:48.068090  380787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:06:48.133811  380787 cri.go:89] found id: "8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6"
	I1205 07:06:48.133834  380787 cri.go:89] found id: "d5679f317a43257700a6ccf786a90e51b3e511459a6a40b7b87ce098fef9f917"
	I1205 07:06:48.133840  380787 cri.go:89] found id: "041ee86966827d1886c5681f5cc5a2513966eb3b32160dabab858784a89fb062"
	I1205 07:06:48.133844  380787 cri.go:89] found id: "2073d619fdee4927ee6cab8da5025189478e4d40ae7780f71aca88691a55b2b6"
	I1205 07:06:48.133848  380787 cri.go:89] found id: "eba75d111920093803e4d959a724517ca2eb3568d86480365967a5d7db5ff7c7"
	I1205 07:06:48.133854  380787 cri.go:89] found id: "6a724b46320af3fc8ab17876c05bc17339d6f6ecdfe81d092e5183ab79c4eff0"
	I1205 07:06:48.133859  380787 cri.go:89] found id: "594bd97237274f1209e2fd22044fdd8fa87336d8f65f7ae5ab3d67cbd890b73e"
	I1205 07:06:48.133863  380787 cri.go:89] found id: "be81b724a08e37b312d3b403f0b0b16774c9d6683375247cd1da277090b0bb4c"
	I1205 07:06:48.133866  380787 cri.go:89] found id: "db01c7251a1de792a86f18e9816a7049b81ed772e45d77eb735784deca6ba7ed"
	I1205 07:06:48.133874  380787 cri.go:89] found id: "796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae"
	I1205 07:06:48.133879  380787 cri.go:89] found id: "c24118d3ceb705dfa27fd02fb7a78d52069c473b9d07b42ae3776ce72626c519"
	I1205 07:06:48.133884  380787 cri.go:89] found id: ""
	I1205 07:06:48.133933  380787 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:06:48.147948  380787 out.go:203] 
	W1205 07:06:48.149088  380787 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 07:06:48.149104  380787 out.go:285] * 
	* 
	W1205 07:06:48.153349  380787 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:06:48.154447  380787 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-008839 --alsologtostderr -v=1 failed: exit status 80
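The trace above shows where the pause path fails: after disabling the kubelet it lists pod containers with crictl and then runs "sudo runc list -f json", which exits with status 1 and "open /run/runc: no such file or directory" on this crio node, so the pause is aborted with GUEST_PAUSE. A minimal sketch for replaying the same checks by hand, using only the commands already visible in the trace (the profile name no-preload-008839 is specific to this run):

	# Sketch only: re-run the checks from the trace inside the node.
	minikube ssh -p no-preload-008839 -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	minikube ssh -p no-preload-008839 -- "sudo runc list -f json"   # reproduces: open /run/runc: no such file or directory
	minikube ssh -p no-preload-008839 -- "sudo ls -ld /run/runc"    # check whether the runc root directory exists at all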
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-008839
helpers_test.go:243: (dbg) docker inspect no-preload-008839:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55",
	        "Created": "2025-12-05T07:04:31.584731019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 366914,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:05:49.009564476Z",
	            "FinishedAt": "2025-12-05T07:05:47.747007176Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/hosts",
	        "LogPath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55-json.log",
	        "Name": "/no-preload-008839",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-008839:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-008839",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55",
	                "LowerDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-008839",
	                "Source": "/var/lib/docker/volumes/no-preload-008839/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-008839",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-008839",
	                "name.minikube.sigs.k8s.io": "no-preload-008839",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bc32fdb41895d050f08719c0a398da9a6d0a0338fd5531acc261e9034d9a1990",
	            "SandboxKey": "/var/run/docker/netns/bc32fdb41895",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-008839": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b424bb5358c0ff78bed421f719287c2770f3aa97ebe3ad623f9f893abf37a15e",
	                    "EndpointID": "45844ebdd56528fd490117de910d87f89cbd3e29f331f5372b74a7548deffbb4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "da:57:9e:f3:a8:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-008839",
	                        "9ca1114060bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
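The port mappings captured in the inspect dump above are the randomly assigned 127.0.0.1 host ports the harness connects through; each one can be read back with the same Go-template query the log uses later for 22/tcp. A minimal sketch, assuming the no-preload-008839 container is still in the state shown above:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-008839
    # for the state captured above this would print 33121, the host port forwarded to the API server port 8443/tcp
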
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008839 -n no-preload-008839
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008839 -n no-preload-008839: exit status 2 (331.607263ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
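The {{.Host}} query above only reflects the container state ("Running"), while the exit status summarizes every component. A sketch of how to see the per-component breakdown, assuming the same minikube build and profile (the status command also accepts JSON output):

    out/minikube-linux-amd64 status -p no-preload-008839 --output=json
    # reports Host, Kubelet, APIServer and Kubeconfig separately; after a pause,
    # components other than Host are expected to show a non-Running state
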
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-008839 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-008839 logs -n 25: (1.175388658s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p disable-driver-mounts-245906                                                                                                                                                                                                                      │ disable-driver-mounts-245906 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p old-k8s-version-874709 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-874709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p no-preload-008839 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-172186 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p no-preload-008839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-770390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ image   │ no-preload-008839 image list --format=json                                                                                                                                                                                                           │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p no-preload-008839 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:06:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:06:26.588234  375543 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:06:26.588509  375543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:26.588519  375543 out.go:374] Setting ErrFile to fd 2...
	I1205 07:06:26.588525  375543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:26.588695  375543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:06:26.589115  375543 out.go:368] Setting JSON to false
	I1205 07:06:26.590262  375543 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6531,"bootTime":1764911856,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:06:26.590314  375543 start.go:143] virtualization: kvm guest
	I1205 07:06:26.592067  375543 out.go:179] * [embed-certs-770390] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:06:26.593635  375543 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:06:26.593659  375543 notify.go:221] Checking for updates...
	I1205 07:06:26.595966  375543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:06:26.597221  375543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:26.598431  375543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:06:26.599882  375543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:06:26.601166  375543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:06:26.384025  375309 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:06:26.384217  375309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:06:26.408220  375309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:06:26.408239  375309 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:06:26.412289  375309 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1205 07:06:26.618671  375309 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 07:06:26.618857  375309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:06:26.618897  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json: {Name:mk1a3d1498588cc35fd8c475060bbc66ec8b6678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:26.618949  375309 cache.go:107] acquiring lock: {Name:mk98363952ca1815516604fb7dbfef9be11a7d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.618987  375309 cache.go:107] acquiring lock: {Name:mk167c9428ef1965e0e29561c9593491905126f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.618994  375309 cache.go:107] acquiring lock: {Name:mk205a6d5dedd135c0c99429d72b9328d8d5dc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619036  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:06:26.619036  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:06:26.619047  375309 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 62.095µs
	I1205 07:06:26.618958  375309 cache.go:107] acquiring lock: {Name:mkf79bca1dcd2e8402871ccbd85f08189f26d5a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619060  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:06:26.619047  375309 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.433µs
	I1205 07:06:26.619070  375309 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:06:26.618954  375309 cache.go:107] acquiring lock: {Name:mk4eccc9886628e868c0adec616b704f1c193356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619075  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:06:26.619080  375309 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 78.568µs
	I1205 07:06:26.619083  375309 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 131.383µs
	I1205 07:06:26.619092  375309 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:06:26.619073  375309 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:06:26.619100  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:06:26.619101  375309 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619062  375309 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619093  375309 cache.go:107] acquiring lock: {Name:mk55ddd5ec022e6049bc6d750efbad0639669233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619107  375309 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 163.978µs
	I1205 07:06:26.619116  375309 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619122  375309 start.go:360] acquireMachinesLock for newest-cni-624263: {Name:mka35bbd7b5824f70f8017fd9b3a0ee56ab72931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619139  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:06:26.619147  375309 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 56.825µs
	I1205 07:06:26.619164  375309 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:06:26.619187  375309 start.go:364] duration metric: took 54.102µs to acquireMachinesLock for "newest-cni-624263"
	I1205 07:06:26.619178  375309 cache.go:107] acquiring lock: {Name:mk7e52439bbd1c3c582b2dbb20db8467b0caa4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619209  375309 start.go:93] Provisioning new machine with config: &{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:06:26.619295  375309 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:06:26.619290  375309 cache.go:107] acquiring lock: {Name:mk64ac073eb60c52be1998c1349c3f317eb7eb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619407  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:06:26.619430  375309 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 331.673µs
	I1205 07:06:26.619447  375309 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:06:26.619268  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:06:26.619462  375309 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 475.67µs
	I1205 07:06:26.619474  375309 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619482  375309 cache.go:87] Successfully saved all images to host disk.
	I1205 07:06:26.602620  375543 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:26.603160  375543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:06:26.627216  375543 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:06:26.627376  375543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:26.688879  375543 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-12-05 07:06:26.678958971 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:26.689006  375543 docker.go:319] overlay module found
	I1205 07:06:26.690710  375543 out.go:179] * Using the docker driver based on existing profile
	I1205 07:06:26.691897  375543 start.go:309] selected driver: docker
	I1205 07:06:26.691911  375543 start.go:927] validating driver "docker" against &{Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:26.692006  375543 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:06:26.692563  375543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:26.753344  375543 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-12-05 07:06:26.743404439 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:26.753715  375543 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:26.753753  375543 cni.go:84] Creating CNI manager for ""
	I1205 07:06:26.753817  375543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:26.753868  375543 start.go:353] cluster config:
	{Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:26.755544  375543 out.go:179] * Starting "embed-certs-770390" primary control-plane node in "embed-certs-770390" cluster
	I1205 07:06:26.756738  375543 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:06:26.757980  375543 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:06:26.759082  375543 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:26.759119  375543 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 07:06:26.759135  375543 cache.go:65] Caching tarball of preloaded images
	I1205 07:06:26.759194  375543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:06:26.759237  375543 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 07:06:26.759253  375543 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:06:26.759384  375543 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/config.json ...
	I1205 07:06:26.780168  375543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:06:26.780185  375543 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:06:26.780201  375543 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:06:26.780233  375543 start.go:360] acquireMachinesLock for embed-certs-770390: {Name:mk0b160cfba8a84d98b6566219365b8df24bf5b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.780296  375543 start.go:364] duration metric: took 44.736µs to acquireMachinesLock for "embed-certs-770390"
	I1205 07:06:26.780318  375543 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:06:26.780342  375543 fix.go:54] fixHost starting: 
	I1205 07:06:26.780580  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:26.799942  375543 fix.go:112] recreateIfNeeded on embed-certs-770390: state=Stopped err=<nil>
	W1205 07:06:26.799979  375543 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:06:23.903235  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:25.904229  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:27.904712  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:26.624904  375309 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:06:26.625236  375309 start.go:159] libmachine.API.Create for "newest-cni-624263" (driver="docker")
	I1205 07:06:26.625293  375309 client.go:173] LocalClient.Create starting
	I1205 07:06:26.625440  375309 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem
	I1205 07:06:26.625497  375309 main.go:143] libmachine: Decoding PEM data...
	I1205 07:06:26.625526  375309 main.go:143] libmachine: Parsing certificate...
	I1205 07:06:26.625585  375309 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem
	I1205 07:06:26.625618  375309 main.go:143] libmachine: Decoding PEM data...
	I1205 07:06:26.625632  375309 main.go:143] libmachine: Parsing certificate...
	I1205 07:06:26.626063  375309 cli_runner.go:164] Run: docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:06:26.645528  375309 cli_runner.go:211] docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:06:26.645637  375309 network_create.go:284] running [docker network inspect newest-cni-624263] to gather additional debugging logs...
	I1205 07:06:26.645660  375309 cli_runner.go:164] Run: docker network inspect newest-cni-624263
	W1205 07:06:26.666476  375309 cli_runner.go:211] docker network inspect newest-cni-624263 returned with exit code 1
	I1205 07:06:26.666508  375309 network_create.go:287] error running [docker network inspect newest-cni-624263]: docker network inspect newest-cni-624263: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-624263 not found
	I1205 07:06:26.666525  375309 network_create.go:289] output of [docker network inspect newest-cni-624263]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-624263 not found
	
	** /stderr **
	I1205 07:06:26.666651  375309 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:26.685626  375309 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d57cb024a629 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:ab:20:17:d9:1a} reservation:<nil>}
	I1205 07:06:26.686333  375309 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-29ce45f1f3fd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:f2:e1:5a:fb:fd} reservation:<nil>}
	I1205 07:06:26.687062  375309 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-18be16a82b81 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:25:6c:b3:f6:c6} reservation:<nil>}
	I1205 07:06:26.687648  375309 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-931902d22986 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:1a:d5:72:cd:51} reservation:<nil>}
	I1205 07:06:26.688156  375309 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b424bb5358c0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e6:4c:79:ba:46:30} reservation:<nil>}
	I1205 07:06:26.688952  375309 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7252f408ef75 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ce:04:ba:35:24:10} reservation:<nil>}
	I1205 07:06:26.689983  375309 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020b7df0}
	I1205 07:06:26.690008  375309 network_create.go:124] attempt to create docker network newest-cni-624263 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1205 07:06:26.690065  375309 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-624263 newest-cni-624263
	I1205 07:06:26.743102  375309 network_create.go:108] docker network newest-cni-624263 192.168.103.0/24 created
	I1205 07:06:26.743126  375309 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-624263" container
	I1205 07:06:26.743192  375309 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:06:26.762523  375309 cli_runner.go:164] Run: docker volume create newest-cni-624263 --label name.minikube.sigs.k8s.io=newest-cni-624263 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:06:26.780448  375309 oci.go:103] Successfully created a docker volume newest-cni-624263
	I1205 07:06:26.780537  375309 cli_runner.go:164] Run: docker run --rm --name newest-cni-624263-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-624263 --entrypoint /usr/bin/test -v newest-cni-624263:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:06:27.200143  375309 oci.go:107] Successfully prepared a docker volume newest-cni-624263
	I1205 07:06:27.200209  375309 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1205 07:06:27.200286  375309 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1205 07:06:27.200310  375309 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1205 07:06:27.200392  375309 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:06:27.265015  375309 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-624263 --name newest-cni-624263 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-624263 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-624263 --network newest-cni-624263 --ip 192.168.103.2 --volume newest-cni-624263:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:06:27.552297  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Running}}
	I1205 07:06:27.573173  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:27.593054  375309 cli_runner.go:164] Run: docker exec newest-cni-624263 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:06:27.634139  375309 oci.go:144] the created container "newest-cni-624263" has a running status.
	I1205 07:06:27.634169  375309 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa...
	I1205 07:06:27.810850  375309 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:06:27.838307  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:27.864433  375309 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:06:27.864459  375309 kic_runner.go:114] Args: [docker exec --privileged newest-cni-624263 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:06:27.914874  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:27.937979  375309 machine.go:94] provisionDockerMachine start ...
	I1205 07:06:27.938080  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:27.957892  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:27.958181  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:27.958199  375309 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:06:28.099298  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:06:28.099339  375309 ubuntu.go:182] provisioning hostname "newest-cni-624263"
	I1205 07:06:28.099404  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.118216  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:28.118434  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:28.118447  375309 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-624263 && echo "newest-cni-624263" | sudo tee /etc/hostname
	I1205 07:06:28.266352  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:06:28.266427  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.285381  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:28.285625  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:28.285656  375309 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-624263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-624263/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-624263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:06:28.421424  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:06:28.421450  375309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:06:28.421501  375309 ubuntu.go:190] setting up certificates
	I1205 07:06:28.421519  375309 provision.go:84] configureAuth start
	I1205 07:06:28.421570  375309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:06:28.439867  375309 provision.go:143] copyHostCerts
	I1205 07:06:28.439922  375309 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:06:28.439932  375309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:06:28.439988  375309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:06:28.440064  375309 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:06:28.440072  375309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:06:28.440097  375309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:06:28.440150  375309 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:06:28.440157  375309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:06:28.440178  375309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:06:28.440226  375309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.newest-cni-624263 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-624263]
	I1205 07:06:28.490526  375309 provision.go:177] copyRemoteCerts
	I1205 07:06:28.490572  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:06:28.490604  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.508254  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:28.607548  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:06:28.626034  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:06:28.643274  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:06:28.660190  375309 provision.go:87] duration metric: took 238.65746ms to configureAuth
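At this point minikube has generated a server certificate with the SANs listed above and copied ca.pem, server.pem, and server-key.pem into /etc/docker on the node. A minimal way to double-check the result by hand, sketched under the assumption that openssl is available inside the kicbase image (the test itself does not run this):

    docker exec newest-cni-624263 sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'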
	I1205 07:06:28.660213  375309 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:06:28.660451  375309 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:06:28.660552  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.678203  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:28.678454  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:28.678473  375309 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:06:28.964368  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
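The drop-in written above hands CRI-O an --insecure-registry flag for the 10.96.0.0/12 service CIDR and restarts the daemon. A quick manual verification, sketched on the assumption that the same container is still running:

    docker exec newest-cni-624263 cat /etc/sysconfig/crio.minikube
    docker exec newest-cni-624263 systemctl is-active crio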
	I1205 07:06:28.964391  375309 machine.go:97] duration metric: took 1.026387988s to provisionDockerMachine
	I1205 07:06:28.964401  375309 client.go:176] duration metric: took 2.339097815s to LocalClient.Create
	I1205 07:06:28.964417  375309 start.go:167] duration metric: took 2.339183991s to libmachine.API.Create "newest-cni-624263"
	I1205 07:06:28.964424  375309 start.go:293] postStartSetup for "newest-cni-624263" (driver="docker")
	I1205 07:06:28.964437  375309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:06:28.964496  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:06:28.964532  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.983132  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.083395  375309 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:06:29.086772  375309 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:06:29.086801  375309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:06:29.086821  375309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:06:29.086871  375309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:06:29.086968  375309 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:06:29.087082  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:06:29.094830  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:29.113691  375309 start.go:296] duration metric: took 149.256692ms for postStartSetup
	I1205 07:06:29.114008  375309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:06:29.132535  375309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:06:29.132800  375309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:06:29.132848  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:29.154540  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.253994  375309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:06:29.258256  375309 start.go:128] duration metric: took 2.638946756s to createHost
	I1205 07:06:29.258278  375309 start.go:83] releasing machines lock for "newest-cni-624263", held for 2.6390804s
	I1205 07:06:29.258357  375309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:06:29.275163  375309 ssh_runner.go:195] Run: cat /version.json
	I1205 07:06:29.275199  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:29.275243  375309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:06:29.275301  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:29.292525  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.293433  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.439694  375309 ssh_runner.go:195] Run: systemctl --version
	I1205 07:06:29.445781  375309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:06:29.478433  375309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:06:29.482835  375309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:06:29.482896  375309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:06:29.507064  375309 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 07:06:29.507086  375309 start.go:496] detecting cgroup driver to use...
	I1205 07:06:29.507115  375309 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:06:29.507154  375309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:06:29.523263  375309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:06:29.534962  375309 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:06:29.535000  375309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:06:29.549931  375309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:06:29.566793  375309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:06:29.650059  375309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:06:29.736486  375309 docker.go:234] disabling docker service ...
	I1205 07:06:29.736547  375309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:06:29.754991  375309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:06:29.766663  375309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:06:29.846539  375309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:06:29.924690  375309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:06:29.936548  375309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:06:29.950065  375309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:06:29.950123  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.959781  375309 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:06:29.959833  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.967908  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.975938  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.983900  375309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:06:29.991260  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.999272  375309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:30.012680  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:30.021140  375309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:06:30.028051  375309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:06:30.034722  375309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:30.112871  375309 ssh_runner.go:195] Run: sudo systemctl restart crio
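The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10.1 as the pause image, the systemd cgroup manager with a pod-scoped conmon cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl, then reloads systemd and restarts the service. Condensed into a standalone sketch (same file and values as in the log; run as root inside the node):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sed -i '/conmon_cgroup = .*/d' "$CONF"
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    grep -q '^ *default_sysctls' "$CONF" || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    systemctl daemon-reload && systemctl restart crio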
	I1205 07:06:30.237839  375309 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:06:30.237906  375309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:06:30.241691  375309 start.go:564] Will wait 60s for crictl version
	I1205 07:06:30.241747  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.244968  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:06:30.267110  375309 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
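crictl here talks to CRI-O over the socket configured in /etc/crictl.yaml a few lines earlier. The same information can be queried with the endpoint given explicitly, e.g. (sketch; endpoint taken from the crictl.yaml written above):

    sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version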
	I1205 07:06:30.267179  375309 ssh_runner.go:195] Run: crio --version
	I1205 07:06:30.294236  375309 ssh_runner.go:195] Run: crio --version
	I1205 07:06:30.323746  375309 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 07:06:30.324950  375309 cli_runner.go:164] Run: docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:30.341782  375309 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1205 07:06:30.345513  375309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:30.356609  375309 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1205 07:06:28.056673  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:30.560609  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:30.357703  375309 kubeadm.go:884] updating cluster {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:06:30.357837  375309 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:06:30.357886  375309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:30.381946  375309 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:06:30.381975  375309 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:06:30.382034  375309 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.382056  375309 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:06:30.382071  375309 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.382087  375309 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.382058  375309 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.382035  375309 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.382041  375309 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.382074  375309 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.383617  375309 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.383669  375309 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.383686  375309 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.383611  375309 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.383775  375309 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.383990  375309 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:06:30.384965  375309 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.385843  375309 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.534923  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.535969  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.541762  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.547313  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.558484  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.574838  375309 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1205 07:06:30.574883  375309 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.575084  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.578994  375309 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1205 07:06:30.579036  375309 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.579087  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.587216  375309 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1205 07:06:30.587248  375309 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.587287  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.601815  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.637213  375309 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1205 07:06:30.637252  375309 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.637293  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.637309  375309 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1205 07:06:30.637355  375309 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.637389  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.637394  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.637440  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.637462  375309 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1205 07:06:30.637481  375309 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.637445  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.637510  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.668185  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.668206  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.668216  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.668196  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.668257  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.668292  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.705400  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.705445  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.705403  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.705531  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.706185  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.706239  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.739595  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:06:30.739704  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:06:30.741607  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:30.741700  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:30.741619  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.741797  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.744944  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.744985  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:30.745064  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:30.746956  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:06:30.746987  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1205 07:06:30.794130  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:06:30.794147  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:30.794128  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.794178  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.794187  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1205 07:06:30.794196  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1205 07:06:30.794229  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:06:30.794234  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:30.794261  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:06:30.794338  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:06:30.836933  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:06:30.836964  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1205 07:06:30.838245  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.838272  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1205 07:06:30.838338  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.838364  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1205 07:06:30.857777  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.952672  375309 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 07:06:30.952727  375309 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.952794  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.991362  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:31.049944  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:31.105055  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:31.161810  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:31.161973  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:31.166067  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:06:31.166166  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
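Because no preload tarball exists for v1.35.0-beta.0, each image is handled individually: a stat existence check in /var/lib/minikube/images, a copy from the host-side cache if missing, then a podman load into CRI-O's image store. Reduced to a per-image sketch (the copy is really done by minikube's ssh_runner over the published SSH port; a plain scp to a hypothetical SSH alias "newest-cni-node" stands in for it here):

    # on the node: does the image tarball already exist?
    stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
    # from the host, if not: copy it out of the local cache
    # ("newest-cni-node" is a hypothetical alias for the container's published SSH port 33133)
    scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 \
        newest-cni-node:/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
    # on the node: load it into CRI-O's image store
    sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0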
	I1205 07:06:26.801554  375543 out.go:252] * Restarting existing docker container for "embed-certs-770390" ...
	I1205 07:06:26.801629  375543 cli_runner.go:164] Run: docker start embed-certs-770390
	I1205 07:06:27.074915  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:27.097444  375543 kic.go:430] container "embed-certs-770390" state is running.
	I1205 07:06:27.097863  375543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:06:27.118527  375543 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/config.json ...
	I1205 07:06:27.118771  375543 machine.go:94] provisionDockerMachine start ...
	I1205 07:06:27.118869  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:27.140642  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:27.140903  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:27.140920  375543 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:06:27.141707  375543 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53866->127.0.0.1:33128: read: connection reset by peer
	I1205 07:06:30.285862  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-770390
	
	I1205 07:06:30.285883  375543 ubuntu.go:182] provisioning hostname "embed-certs-770390"
	I1205 07:06:30.285963  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:30.306084  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:30.306389  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:30.306406  375543 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-770390 && echo "embed-certs-770390" | sudo tee /etc/hostname
	I1205 07:06:30.457639  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-770390
	
	I1205 07:06:30.457716  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:30.475904  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:30.476118  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:30.476140  375543 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-770390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-770390/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-770390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:06:30.618737  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:06:30.618762  375543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:06:30.618787  375543 ubuntu.go:190] setting up certificates
	I1205 07:06:30.618798  375543 provision.go:84] configureAuth start
	I1205 07:06:30.618872  375543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:06:30.637076  375543 provision.go:143] copyHostCerts
	I1205 07:06:30.637138  375543 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:06:30.637151  375543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:06:30.637230  375543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:06:30.637377  375543 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:06:30.637400  375543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:06:30.637449  375543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:06:30.637555  375543 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:06:30.637567  375543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:06:30.637606  375543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:06:30.637698  375543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.embed-certs-770390 san=[127.0.0.1 192.168.76.2 embed-certs-770390 localhost minikube]
	I1205 07:06:30.850789  375543 provision.go:177] copyRemoteCerts
	I1205 07:06:30.850846  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:06:30.850878  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:30.870854  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:30.979857  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:06:31.002122  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:06:31.026307  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:06:31.050483  375543 provision.go:87] duration metric: took 431.665526ms to configureAuth
	I1205 07:06:31.050515  375543 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:06:31.050734  375543 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:31.050879  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:31.077241  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:31.077607  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:31.077644  375543 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1205 07:06:30.403214  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:32.403773  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:32.903916  366710 pod_ready.go:94] pod "coredns-7d764666f9-bvbhf" is "Ready"
	I1205 07:06:32.903942  366710 pod_ready.go:86] duration metric: took 34.00575162s for pod "coredns-7d764666f9-bvbhf" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.906601  366710 pod_ready.go:83] waiting for pod "etcd-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.913301  366710 pod_ready.go:94] pod "etcd-no-preload-008839" is "Ready"
	I1205 07:06:32.913400  366710 pod_ready.go:86] duration metric: took 6.777304ms for pod "etcd-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.915636  366710 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.919542  366710 pod_ready.go:94] pod "kube-apiserver-no-preload-008839" is "Ready"
	I1205 07:06:32.919566  366710 pod_ready.go:86] duration metric: took 3.909248ms for pod "kube-apiserver-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.921563  366710 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:33.101533  366710 pod_ready.go:94] pod "kube-controller-manager-no-preload-008839" is "Ready"
	I1205 07:06:33.101569  366710 pod_ready.go:86] duration metric: took 179.984485ms for pod "kube-controller-manager-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:33.301800  366710 pod_ready.go:83] waiting for pod "kube-proxy-s9zn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:33.702088  366710 pod_ready.go:94] pod "kube-proxy-s9zn2" is "Ready"
	I1205 07:06:33.702116  366710 pod_ready.go:86] duration metric: took 400.29234ms for pod "kube-proxy-s9zn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:31.721865  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:06:31.721894  375543 machine.go:97] duration metric: took 4.603106939s to provisionDockerMachine
	I1205 07:06:31.721911  375543 start.go:293] postStartSetup for "embed-certs-770390" (driver="docker")
	I1205 07:06:31.721926  375543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:06:31.721985  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:06:31.722034  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:31.745060  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:31.850959  375543 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:06:31.854831  375543 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:06:31.854862  375543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:06:31.854875  375543 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:06:31.854930  375543 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:06:31.855030  375543 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:06:31.855158  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:06:31.863927  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:31.883380  375543 start.go:296] duration metric: took 161.454914ms for postStartSetup
	I1205 07:06:31.883456  375543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:06:31.883520  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:31.906830  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:32.008279  375543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:06:32.013614  375543 fix.go:56] duration metric: took 5.233266702s for fixHost
	I1205 07:06:32.013639  375543 start.go:83] releasing machines lock for "embed-certs-770390", held for 5.233329197s
	I1205 07:06:32.013713  375543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:06:32.035130  375543 ssh_runner.go:195] Run: cat /version.json
	I1205 07:06:32.035191  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:32.035218  375543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:06:32.035305  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:32.059370  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:32.060657  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:32.825514  375543 ssh_runner.go:195] Run: systemctl --version
	I1205 07:06:32.832229  375543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:06:32.867423  375543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:06:32.872157  375543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:06:32.872230  375543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:06:32.880841  375543 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:06:32.880864  375543 start.go:496] detecting cgroup driver to use...
	I1205 07:06:32.880892  375543 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:06:32.880945  375543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:06:32.897262  375543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:06:32.913628  375543 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:06:32.913679  375543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:06:32.931183  375543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:06:32.943212  375543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:06:33.031242  375543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:06:33.124377  375543 docker.go:234] disabling docker service ...
	I1205 07:06:33.124432  375543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:06:33.138291  375543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:06:33.150719  375543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:06:33.243720  375543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:06:33.334574  375543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:06:33.346746  375543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:06:33.360678  375543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:06:33.360741  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.369727  375543 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:06:33.369786  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.378916  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.387258  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.395950  375543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:06:33.405206  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.415134  375543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.425222  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.434369  375543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:06:33.442019  375543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:06:33.449717  375543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:33.543423  375543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:06:33.975505  375543 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:06:33.975586  375543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:06:33.979949  375543 start.go:564] Will wait 60s for crictl version
	I1205 07:06:33.980033  375543 ssh_runner.go:195] Run: which crictl
	I1205 07:06:33.984307  375543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:06:34.008163  375543 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:06:34.008225  375543 ssh_runner.go:195] Run: crio --version
	I1205 07:06:34.036756  375543 ssh_runner.go:195] Run: crio --version
	I1205 07:06:34.070974  375543 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 07:06:33.902396  366710 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:34.301736  366710 pod_ready.go:94] pod "kube-scheduler-no-preload-008839" is "Ready"
	I1205 07:06:34.301762  366710 pod_ready.go:86] duration metric: took 399.341028ms for pod "kube-scheduler-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:34.301777  366710 pod_ready.go:40] duration metric: took 35.406378156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:34.356972  366710 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1205 07:06:34.358967  366710 out.go:179] * Done! kubectl is now configured to use "no-preload-008839" cluster and "default" namespace by default
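minikube names the kubeconfig context after the profile, so the freshly started cluster can be exercised directly; a sketch (using kubectl 1.34.2 per the skew note above):

    kubectl --context no-preload-008839 get pods -A
    kubectl --context no-preload-008839 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=60s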
	I1205 07:06:34.071865  375543 cli_runner.go:164] Run: docker network inspect embed-certs-770390 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:34.089273  375543 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:06:34.093527  375543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:34.104382  375543 kubeadm.go:884] updating cluster {Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:06:34.104493  375543 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:34.104533  375543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:34.135986  375543 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:06:34.136005  375543 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:06:34.136046  375543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:34.163958  375543 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:06:34.163976  375543 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:06:34.163982  375543 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1205 07:06:34.164096  375543 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-770390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:06:34.164159  375543 ssh_runner.go:195] Run: crio config
	I1205 07:06:34.210786  375543 cni.go:84] Creating CNI manager for ""
	I1205 07:06:34.210808  375543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:34.210819  375543 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:06:34.210839  375543 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-770390 NodeName:embed-certs-770390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:06:34.210959  375543 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-770390"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:06:34.211023  375543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:06:34.219056  375543 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:06:34.219118  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:06:34.227080  375543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1205 07:06:34.239752  375543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:06:34.251999  375543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1205 07:06:34.263865  375543 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:06:34.267417  375543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:34.277134  375543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:34.394783  375543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:34.419292  375543 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390 for IP: 192.168.76.2
	I1205 07:06:34.419313  375543 certs.go:195] generating shared ca certs ...
	I1205 07:06:34.419352  375543 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:34.419526  375543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:06:34.419586  375543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:06:34.419598  375543 certs.go:257] generating profile certs ...
	I1205 07:06:34.419694  375543 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/client.key
	I1205 07:06:34.419767  375543 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key.46ffd30e
	I1205 07:06:34.419858  375543 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.key
	I1205 07:06:34.420010  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:06:34.420057  375543 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:06:34.420071  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:06:34.420110  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:06:34.420143  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:06:34.420172  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:06:34.420226  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:34.421032  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:06:34.440844  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:06:34.465635  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:06:34.487656  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:06:34.511641  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 07:06:34.535311  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:06:34.552834  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:06:34.570691  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:06:34.588483  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:06:34.605748  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:06:34.624519  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:06:34.644092  375543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:06:34.657592  375543 ssh_runner.go:195] Run: openssl version
	I1205 07:06:34.663869  375543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.673595  375543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:06:34.683140  375543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.688216  375543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.688277  375543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.738387  375543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:06:34.748071  375543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.757769  375543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:06:34.767020  375543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.770922  375543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.770972  375543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.813377  375543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:06:34.823642  375543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.833453  375543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:06:34.841565  375543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.846018  375543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.846067  375543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.881430  375543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
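Note: the three blocks above repeat the same routine for minikubeCA.pem, 16314.pem and 163142.pem: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, compute the OpenSSL subject hash, and confirm a <hash>.0 symlink exists, which is the naming scheme OpenSSL's default verifier uses to look up trust anchors (b5213941.0, 51391683.0, 3ec20f2e.0 in the log). A rough Go sketch of that hash-and-link step, with illustrative names and paths:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkTrustedCert makes certPath discoverable by OpenSSL's hashed lookup: it
// asks openssl for the subject hash and points /etc/ssl/certs/<hash>.0 at the
// certificate, mirroring the openssl/ln sequence in the log.
func linkTrustedCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// -f replaces an existing link, -s makes it symbolic, as in the logged command.
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := linkTrustedCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}

(Real hashed cert directories also handle hash collisions by bumping the .0 suffix to .1, .2, and so on; the sketch ignores that.)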
	I1205 07:06:34.888928  375543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:06:34.892723  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:06:34.932540  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:06:34.979914  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:06:35.029643  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:06:35.084612  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:06:35.132242  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
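Note: the six -checkend runs above are how the restart path decides whether the existing control-plane certificates are still usable: openssl x509 -checkend 86400 exits 0 only if the certificate will still be valid 24 hours from now, so a non-zero exit triggers regeneration. A small Go wrapper around the same check (names are illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// validForAnotherDay reports whether the certificate at path will still be
// valid 86400 seconds from now; openssl exits non-zero if it would expire.
func validForAnotherDay(path string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	return cmd.Run() == nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s still valid tomorrow: %v\n", p, validForAnotherDay(p))
	}
}
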
	I1205 07:06:35.171706  375543 kubeadm.go:401] StartCluster: {Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:35.171804  375543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:06:35.171880  375543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:06:35.202472  375543 cri.go:89] found id: "2e99e708af8cdf7e8644b2c854970fe3b2f9131df99f8ff6c3a19b08659e1df2"
	I1205 07:06:35.202495  375543 cri.go:89] found id: "4d4e5c825a7de3068675039cb3151e44142096587a1c8f2d75ad7ecbd5429caa"
	I1205 07:06:35.202501  375543 cri.go:89] found id: "923febfdc8bccb1ad8239b49c08d7497c407d21accd38106c20a1aba8cecaffb"
	I1205 07:06:35.202506  375543 cri.go:89] found id: "ae1745cf83f11e7391209efe832ac4ca4aab557828ba3aab75cf48e7fe75b73f"
	I1205 07:06:35.202514  375543 cri.go:89] found id: ""
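Note: the "found id" lines come from the crictl invocation just above: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system prints one container ID per line for every container, running or exited, whose pod lives in kube-system. A Go sketch that runs the same flags and splits the IDs, roughly as the cri.go helper does (the direct sudo invocation is illustrative; the log wraps it in sudo -s eval):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs lists all CRI container IDs labelled as belonging to
// the kube-system namespace, using the same crictl flags as the log.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	fmt.Println(ids, err)
}
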
	I1205 07:06:35.202559  375543 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:06:35.214717  375543 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:35Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:06:35.214778  375543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:06:35.223159  375543 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:06:35.223177  375543 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:06:35.223230  375543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:06:35.231356  375543 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:06:35.232131  375543 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-770390" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:35.232612  375543 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-770390" cluster setting kubeconfig missing "embed-certs-770390" context setting]
	I1205 07:06:35.233423  375543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:35.235317  375543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:06:35.242634  375543 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1205 07:06:35.242665  375543 kubeadm.go:602] duration metric: took 19.477371ms to restartPrimaryControlPlane
	I1205 07:06:35.242675  375543 kubeadm.go:403] duration metric: took 70.981616ms to StartCluster
	I1205 07:06:35.242690  375543 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:35.242761  375543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:35.244041  375543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:35.244259  375543 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:06:35.244338  375543 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:06:35.244434  375543 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-770390"
	I1205 07:06:35.244450  375543 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-770390"
	W1205 07:06:35.244462  375543 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:06:35.244471  375543 addons.go:70] Setting dashboard=true in profile "embed-certs-770390"
	I1205 07:06:35.244496  375543 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:06:35.244500  375543 addons.go:239] Setting addon dashboard=true in "embed-certs-770390"
	W1205 07:06:35.244519  375543 addons.go:248] addon dashboard should already be in state true
	I1205 07:06:35.244510  375543 addons.go:70] Setting default-storageclass=true in profile "embed-certs-770390"
	I1205 07:06:35.244540  375543 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-770390"
	I1205 07:06:35.244551  375543 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:06:35.244494  375543 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:35.244825  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.244991  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.245043  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.247149  375543 out.go:179] * Verifying Kubernetes components...
	I1205 07:06:35.248386  375543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:35.272894  375543 addons.go:239] Setting addon default-storageclass=true in "embed-certs-770390"
	W1205 07:06:35.272915  375543 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:06:35.272939  375543 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:06:35.273400  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.275193  375543 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:35.275251  375543 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:06:35.276704  375543 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:35.276758  375543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:06:35.276764  375543 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1205 07:06:33.056148  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:35.060453  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:31.366255  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1205 07:06:32.346995  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.184991035s)
	I1205 07:06:32.347021  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:06:32.347055  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:32.347104  375309 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1205 07:06:32.347120  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:32.347138  375309 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:06:32.347061  375309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.180871282s)
	I1205 07:06:32.347169  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:06:32.347188  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:32.347192  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1205 07:06:33.570397  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.223258044s)
	I1205 07:06:33.570426  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:06:33.570455  375309 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:06:33.570499  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:06:33.570511  375309 ssh_runner.go:235] Completed: which crictl: (1.223307009s)
	I1205 07:06:33.570561  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:06:34.893160  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.322638807s)
	I1205 07:06:34.893187  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:06:34.893208  375309 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:06:34.893215  375309 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.322634396s)
	I1205 07:06:34.893245  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:06:34.893276  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:06:35.276808  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:35.277808  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:06:35.277826  375543 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:06:35.277888  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:35.301215  375543 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:35.301315  375543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:06:35.301418  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:35.308857  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:35.320257  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:35.332128  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:35.426032  375543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:35.431462  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:06:35.431489  375543 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:06:35.438950  375543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:35.447296  375543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-770390" to be "Ready" ...
	I1205 07:06:35.451227  375543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:35.451848  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:06:35.451886  375543 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:06:35.468647  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:06:35.468668  375543 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:06:35.498954  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:06:35.498976  375543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:06:35.545774  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:06:35.545808  375543 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:06:35.588544  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:06:35.588570  375543 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:06:35.610093  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:06:35.610117  375543 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:06:35.644554  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:06:35.644601  375543 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:06:35.667656  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:06:35.667682  375543 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:06:35.688651  375543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:06:37.536634  375543 node_ready.go:49] node "embed-certs-770390" is "Ready"
	I1205 07:06:37.536671  375543 node_ready.go:38] duration metric: took 2.089351455s for node "embed-certs-770390" to be "Ready" ...
	I1205 07:06:37.536687  375543 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:06:37.536743  375543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:06:38.146255  375543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.707271235s)
	I1205 07:06:38.146314  375543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.695052574s)
	I1205 07:06:38.146429  375543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.457746781s)
	I1205 07:06:38.146472  375543 api_server.go:72] duration metric: took 2.902184723s to wait for apiserver process to appear ...
	I1205 07:06:38.146527  375543 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:06:38.146554  375543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 07:06:38.147993  375543 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-770390 addons enable metrics-server
	
	I1205 07:06:38.154740  375543 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:06:38.154761  375543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
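Note: the 500 above is expected this early in the restart; the [-]poststarthook entries just mean a couple of bootstrap hooks (RBAC roles, system priority classes) have not finished, and the wait loop keeps polling /healthz until it returns 200. A minimal Go sketch of that polling loop, assuming the endpoint and timeout; it skips TLS verification for brevity, whereas a real client would trust the cluster CA (minikubeCA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200 or
// the deadline passes; non-200 bodies list which post-start hooks are pending.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Illustrative only: a production check should verify the server cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
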
	I1205 07:06:38.160172  375543 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1205 07:06:37.561481  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:40.055806  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:36.440601  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.547331042s)
	I1205 07:06:36.440633  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:06:36.440654  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:36.440666  375309 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.547364518s)
	I1205 07:06:36.440699  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:36.440737  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:06:38.061822  375309 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.621051807s)
	I1205 07:06:38.061871  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.621152631s)
	I1205 07:06:38.061900  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:06:38.061925  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:06:38.061878  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1205 07:06:38.061986  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:06:38.062043  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:06:38.066235  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:06:38.066269  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1205 07:06:39.480656  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.418643669s)
	I1205 07:06:39.480686  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:06:39.480713  375309 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:06:39.480763  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:06:40.059650  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:06:40.059692  375309 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:06:40.059745  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1205 07:06:40.168218  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:06:40.168260  375309 cache_images.go:125] Successfully loaded all cached images
	I1205 07:06:40.168267  375309 cache_images.go:94] duration metric: took 9.786277822s to LoadCachedImages
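Note: the sequence that ends here shows how the no-preload path gets images into CRI-O: each cached tarball is scp'd to /var/lib/minikube/images and fed to sudo podman load -i, while crictl rmi clears any stale tag (registry.k8s.io/pause:3.10.1 above) whose on-disk hash no longer matches the expected one. A compact Go sketch of the load step, with illustrative paths:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadCachedImage imports an image tarball into CRI-O's storage via podman,
// mirroring the "Loading image: /var/lib/minikube/images/..." steps above.
func loadCachedImage(tarball string) error {
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	for _, name := range []string{"kube-apiserver_v1.35.0-beta.0", "pause_3.10.1"} {
		tarball := filepath.Join("/var/lib/minikube/images", name)
		if err := loadCachedImage(tarball); err != nil {
			fmt.Println(err)
		}
	}
}
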
	I1205 07:06:40.168281  375309 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1205 07:06:40.168392  375309 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-624263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:06:40.168461  375309 ssh_runner.go:195] Run: crio config
	I1205 07:06:40.215126  375309 cni.go:84] Creating CNI manager for ""
	I1205 07:06:40.215148  375309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:40.215165  375309 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:06:40.215185  375309 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-624263 NodeName:newest-cni-624263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:06:40.215294  375309 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-624263"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:06:40.215371  375309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:06:40.223545  375309 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:06:40.223608  375309 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:06:40.231456  375309 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1205 07:06:40.231456  375309 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1205 07:06:40.231452  375309 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1205 07:06:40.231550  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:06:40.231600  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:06:40.231616  375309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:40.236450  375309 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:06:40.236478  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1205 07:06:40.236508  375309 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:06:40.236532  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1205 07:06:40.253269  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:06:40.289073  375309 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:06:40.289104  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
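Note: the "Not caching binary" lines above reference the standard release URL pair, https://dl.k8s.io/release/<version>/bin/linux/amd64/<binary> plus its .sha256 companion, before the binaries are pushed from the local cache when the stat check fails. A hedged Go sketch of fetching and checksum-verifying such a binary, assuming the .sha256 file carries just the bare hex digest (which is the usual layout for these release artifacts):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchK8sBinary downloads a Kubernetes binary from dl.k8s.io and verifies it
// against the published .sha256 file, returning the binary's bytes.
func fetchK8sBinary(version, name string) ([]byte, error) {
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/amd64/%s", version, name)

	resp, err := http.Get(base)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	sumResp, err := http.Get(base + ".sha256")
	if err != nil {
		return nil, err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return nil, err
	}

	got := sha256.Sum256(data)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
		return nil, fmt.Errorf("%s: checksum mismatch", name)
	}
	return data, nil
}

func main() {
	bin, err := fetchK8sBinary("v1.35.0-beta.0", "kubectl")
	fmt.Println(len(bin), err)
}
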
	I1205 07:06:40.688980  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:06:40.696712  375309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1205 07:06:40.710980  375309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:06:40.726034  375309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 07:06:40.738766  375309 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:06:40.742492  375309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:40.752230  375309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:40.831660  375309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:40.858130  375309 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263 for IP: 192.168.103.2
	I1205 07:06:40.858175  375309 certs.go:195] generating shared ca certs ...
	I1205 07:06:40.858196  375309 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.858496  375309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:06:40.858561  375309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:06:40.858573  375309 certs.go:257] generating profile certs ...
	I1205 07:06:40.858645  375309 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key
	I1205 07:06:40.858659  375309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.crt with IP's: []
	I1205 07:06:40.893856  375309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.crt ...
	I1205 07:06:40.893898  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.crt: {Name:mk2b6195b99d5e275f660429f3814d5bdcd8191d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.894105  375309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key ...
	I1205 07:06:40.894140  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key: {Name:mke407b69941bd64dfca0f6ab1c80bb1c45b93ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.894275  375309 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada
	I1205 07:06:40.894306  375309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1205 07:06:40.941482  375309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada ...
	I1205 07:06:40.941507  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada: {Name:mk677ad869a55b9090eb744dc3feff29e8064497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.941661  375309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada ...
	I1205 07:06:40.941680  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada: {Name:mkb7c70fb23c29d27bdcbb21d4add4953a296250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.941769  375309 certs.go:382] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt
	I1205 07:06:40.941862  375309 certs.go:386] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key
	I1205 07:06:40.941930  375309 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key
	I1205 07:06:40.941945  375309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt with IP's: []
	I1205 07:06:41.076769  375309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt ...
	I1205 07:06:41.076794  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt: {Name:mke1ae4d7cafe67dff134743b1bfeb82268bc450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:41.076927  375309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key ...
	I1205 07:06:41.076940  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key: {Name:mk11a3d7395501747e70db233d7500d344284191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
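	The profile-cert steps above (a client cert, an apiserver serving cert with the listed IP SANs, and an aggregator proxy-client cert) all come down to signing leaf certificates against the existing minikubeCA. The following is a minimal standalone Go sketch of that signing step, not minikube's crypto.go; it generates a throwaway CA in place of the real ca.key so it can run on its own.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// In the real flow the CA comes from .minikube/ca.crt / ca.key; a throwaway
		// CA is generated here only so the sketch runs standalone.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf serving cert carrying the IP SANs shown in the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}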
	I1205 07:06:41.077110  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:06:41.077146  375309 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:06:41.077156  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:06:41.077191  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:06:41.077216  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:06:41.077245  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:06:41.077285  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:41.077869  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:06:41.097495  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:06:41.114088  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:06:41.131277  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:06:41.148175  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:06:41.168203  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:06:41.190211  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:06:38.161254  375543 addons.go:530] duration metric: took 2.916934723s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1205 07:06:38.647484  375543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 07:06:38.654056  375543 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:06:38.654081  375543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:06:39.147586  375543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 07:06:39.152741  375543 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1205 07:06:39.153911  375543 api_server.go:141] control plane version: v1.34.2
	I1205 07:06:39.153938  375543 api_server.go:131] duration metric: took 1.007398463s to wait for apiserver health ...
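	The wait above is a plain poll loop: keep requesting /healthz until it answers 200, treating the transient 500 (rbac/bootstrap-roles still initializing) as not-ready-yet. A rough standalone sketch of such a loop, assuming the endpoint from the log and skipping TLS verification for brevity (minikube's api_server.go verifies against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		// TLS verification is skipped only because this sketch has no CA bundle
		// wired in; the real check authenticates against the cluster CA.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}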
	I1205 07:06:39.153949  375543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:06:39.158877  375543 system_pods.go:59] 8 kube-system pods found
	I1205 07:06:39.158918  375543 system_pods.go:61] "coredns-66bc5c9577-rg55r" [68bcad40-cb20-4ded-b15a-268ddb469470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:06:39.158931  375543 system_pods.go:61] "etcd-embed-certs-770390" [22f37425-6bf2-4bd1-ac8d-a7d8e1a66635] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:06:39.158944  375543 system_pods.go:61] "kindnet-dmpt2" [66c4a813-7f26-44e7-ab6f-be6422d710e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:06:39.158959  375543 system_pods.go:61] "kube-apiserver-embed-certs-770390" [77f4e205-d878-4cb2-9047-4e59db7afa54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:39.158971  375543 system_pods.go:61] "kube-controller-manager-embed-certs-770390" [ec537bde-1efe-493a-911e-43a74e613a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:39.158984  375543 system_pods.go:61] "kube-proxy-7bjnn" [6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:06:39.158989  375543 system_pods.go:61] "kube-scheduler-embed-certs-770390" [75177695-2b4c-4190-a054-eb007d9e3ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:39.158999  375543 system_pods.go:61] "storage-provisioner" [5c5ef936-ac84-44f0-8299-e431bcbbf8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:06:39.159007  375543 system_pods.go:74] duration metric: took 5.050804ms to wait for pod list to return data ...
	I1205 07:06:39.159021  375543 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:06:39.161392  375543 default_sa.go:45] found service account: "default"
	I1205 07:06:39.161413  375543 default_sa.go:55] duration metric: took 2.38628ms for default service account to be created ...
	I1205 07:06:39.161420  375543 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:06:39.163935  375543 system_pods.go:86] 8 kube-system pods found
	I1205 07:06:39.163966  375543 system_pods.go:89] "coredns-66bc5c9577-rg55r" [68bcad40-cb20-4ded-b15a-268ddb469470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:06:39.163978  375543 system_pods.go:89] "etcd-embed-certs-770390" [22f37425-6bf2-4bd1-ac8d-a7d8e1a66635] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:06:39.163992  375543 system_pods.go:89] "kindnet-dmpt2" [66c4a813-7f26-44e7-ab6f-be6422d710e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:06:39.164005  375543 system_pods.go:89] "kube-apiserver-embed-certs-770390" [77f4e205-d878-4cb2-9047-4e59db7afa54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:39.164016  375543 system_pods.go:89] "kube-controller-manager-embed-certs-770390" [ec537bde-1efe-493a-911e-43a74e613a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:39.164027  375543 system_pods.go:89] "kube-proxy-7bjnn" [6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:06:39.164038  375543 system_pods.go:89] "kube-scheduler-embed-certs-770390" [75177695-2b4c-4190-a054-eb007d9e3ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:39.164055  375543 system_pods.go:89] "storage-provisioner" [5c5ef936-ac84-44f0-8299-e431bcbbf8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:06:39.164067  375543 system_pods.go:126] duration metric: took 2.64117ms to wait for k8s-apps to be running ...
	I1205 07:06:39.164079  375543 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:06:39.164127  375543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:39.181008  375543 system_svc.go:56] duration metric: took 16.921756ms WaitForService to wait for kubelet
	I1205 07:06:39.181041  375543 kubeadm.go:587] duration metric: took 3.936753325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:39.181064  375543 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:06:39.184000  375543 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:06:39.184034  375543 node_conditions.go:123] node cpu capacity is 8
	I1205 07:06:39.184053  375543 node_conditions.go:105] duration metric: took 2.982688ms to run NodePressure ...
	I1205 07:06:39.184070  375543 start.go:242] waiting for startup goroutines ...
	I1205 07:06:39.184085  375543 start.go:247] waiting for cluster config update ...
	I1205 07:06:39.184102  375543 start.go:256] writing updated cluster config ...
	I1205 07:06:39.193568  375543 ssh_runner.go:195] Run: rm -f paused
	I1205 07:06:39.197314  375543 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:39.200374  375543 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rg55r" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:06:41.204973  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:06:41.212073  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:06:41.231583  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:06:41.253120  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:06:41.272824  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:06:41.292610  375309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:06:41.308462  375309 ssh_runner.go:195] Run: openssl version
	I1205 07:06:41.316714  375309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.325091  375309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:06:41.332343  375309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.336139  375309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.336194  375309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.372232  375309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:06:41.379524  375309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:06:41.386631  375309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.393737  375309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:06:41.401581  375309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.405466  375309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.405515  375309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.439825  375309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:06:41.447189  375309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:06:41.455927  375309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.463164  375309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:06:41.470435  375309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.473992  375309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.474034  375309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.515208  375309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:06:41.525475  375309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0
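	The three repetitions above (163142.pem, minikubeCA.pem, 16314.pem) follow the standard trust-store convention: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it. A minimal sketch of those same steps, shelling out locally rather than over SSH as minikube's ssh_runner does; the helper name and paths are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCA copies a PEM certificate into the shared trust directory and links
	// it into /etc/ssl/certs under its OpenSSL subject hash, the naming convention
	// TLS libraries use when scanning that directory.
	func installCA(certPath, name string) error {
		dst := "/usr/share/ca-certificates/" + name
		if out, err := exec.Command("sudo", "cp", certPath, dst).CombinedOutput(); err != nil {
			return fmt.Errorf("copy cert: %v: %s", err, out)
		}
		// "openssl x509 -hash -noout" prints the subject-name hash, e.g. b5213941.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
		if err != nil {
			return fmt.Errorf("hash cert: %v", err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		if out, err := exec.Command("sudo", "ln", "-fs", dst, link).CombinedOutput(); err != nil {
			return fmt.Errorf("link cert: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := installCA("/tmp/minikubeCA.pem", "minikubeCA.pem"); err != nil {
			fmt.Println("install failed:", err)
		}
	}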
	I1205 07:06:41.535050  375309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:06:41.540368  375309 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:06:41.540428  375309 kubeadm.go:401] StartCluster: {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:41.540520  375309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:06:41.540579  375309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:06:41.574193  375309 cri.go:89] found id: ""
	I1205 07:06:41.574260  375309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:06:41.582447  375309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:06:41.590634  375309 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:06:41.590683  375309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:06:41.598032  375309 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:06:41.598048  375309 kubeadm.go:158] found existing configuration files:
	
	I1205 07:06:41.598083  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:06:41.605848  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:06:41.605900  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:06:41.613213  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:06:41.620371  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:06:41.620417  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:06:41.627391  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:06:41.634542  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:06:41.634592  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:06:41.641338  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:06:41.648894  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:06:41.648944  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
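	The grep/rm pairs above are a stale-config sweep: each kubeconfig kubeadm may have left under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm init regenerates it. A compact local sketch of that loop (illustrative only, not minikube's kubeadm.go, and assumed to run with sufficient privileges):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				// First start: the file does not exist yet, nothing to clean up.
				continue
			}
			if !strings.Contains(string(data), endpoint) {
				// Leftover config pointing at a different endpoint: remove it so
				// kubeadm init regenerates it.
				if err := os.Remove(f); err != nil {
					fmt.Println("remove:", err)
				}
			}
		}
	}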
	I1205 07:06:41.656607  375309 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:06:41.696598  375309 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:06:41.696706  375309 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:06:41.759716  375309 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:06:41.759824  375309 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1205 07:06:41.759883  375309 kubeadm.go:319] OS: Linux
	I1205 07:06:41.759954  375309 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:06:41.760020  375309 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:06:41.760091  375309 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:06:41.760146  375309 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:06:41.760192  375309 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:06:41.760252  375309 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:06:41.760365  375309 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:06:41.760434  375309 kubeadm.go:319] CGROUPS_IO: enabled
	I1205 07:06:41.814175  375309 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:06:41.814315  375309 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:06:41.814467  375309 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:06:41.827236  375309 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:06:41.830237  375309 out.go:252]   - Generating certificates and keys ...
	I1205 07:06:41.830391  375309 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:06:41.830478  375309 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:06:41.861271  375309 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:06:42.094457  375309 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:06:42.144264  375309 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:06:42.276913  375309 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:06:42.446846  375309 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:06:42.447034  375309 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-624263] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1205 07:06:42.609304  375309 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:06:42.609696  375309 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-624263] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1205 07:06:42.767082  375309 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:06:43.048880  375309 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:06:43.119451  375309 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:06:43.119727  375309 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:06:43.389014  375309 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:06:43.643799  375309 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:06:43.853126  375309 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:06:44.168810  375309 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:06:44.219881  375309 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:06:44.220746  375309 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:06:44.227994  375309 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1205 07:06:42.556667  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:44.557029  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:44.229477  375309 out.go:252]   - Booting up control plane ...
	I1205 07:06:44.229641  375309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:06:44.229761  375309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:06:44.230667  375309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:06:44.249377  375309 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:06:44.249530  375309 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:06:44.258992  375309 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:06:44.259591  375309 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:06:44.259660  375309 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:06:44.400746  375309 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:06:44.400911  375309 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:06:45.401590  375309 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00117802s
	I1205 07:06:45.405602  375309 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:06:45.405744  375309 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1205 07:06:45.405949  375309 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:06:45.406099  375309 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1205 07:06:43.207479  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:45.732411  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 05 07:06:19 no-preload-008839 crio[568]: time="2025-12-05T07:06:19.067334184Z" level=info msg="Started container" PID=1741 containerID=be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper id=fc2097e5-2162-40c0-9c21-12f6f3a4bbf6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67fffdf3b91e706cc7f50911009201999a02a1aa0fa55d1541d5d34a7d6dc529
	Dec 05 07:06:19 no-preload-008839 crio[568]: time="2025-12-05T07:06:19.105141132Z" level=info msg="Removing container: fdd1cb5f31c58dac4c760ce02d6a59df0ec2bcc83c0378b6ae415d603be441ab" id=679f3d13-259d-43f0-b2a5-1376e82a80a7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:19 no-preload-008839 crio[568]: time="2025-12-05T07:06:19.116376604Z" level=info msg="Removed container fdd1cb5f31c58dac4c760ce02d6a59df0ec2bcc83c0378b6ae415d603be441ab: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper" id=679f3d13-259d-43f0-b2a5-1376e82a80a7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.13131289Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=35a17115-95e6-47b2-9e96-e52cacc4c075 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.132359757Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=51bf6922-1320-4546-852e-3c8db2f54541 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.133524597Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4f88e49d-8e74-4aa4-b145-129235ffc7dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.133664972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.137731586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.137865193Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/607764f379c5c9d369ecf353f9d5deaecdb446689c6a9700bf943f17565851c8/merged/etc/passwd: no such file or directory"
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.137889253Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/607764f379c5c9d369ecf353f9d5deaecdb446689c6a9700bf943f17565851c8/merged/etc/group: no such file or directory"
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.138079466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.164394854Z" level=info msg="Created container 8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6: kube-system/storage-provisioner/storage-provisioner" id=4f88e49d-8e74-4aa4-b145-129235ffc7dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.164980358Z" level=info msg="Starting container: 8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6" id=68be4556-334e-48fb-93d6-ce8ec979900b name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.167126435Z" level=info msg="Started container" PID=1756 containerID=8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6 description=kube-system/storage-provisioner/storage-provisioner id=68be4556-334e-48fb-93d6-ce8ec979900b name=/runtime.v1.RuntimeService/StartContainer sandboxID=086bd8c723c626a8a55dad439fb64c41d101f88e90a9e8124fcbc802653232ef
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.019786187Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=37cf8cf3-8efe-4594-bc91-c0a5408afde7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.02068776Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b3564dc5-c2e0-474b-8c0e-28484104ce4f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.021747444Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper" id=2b911702-7c37-4a23-906d-c258e73a17bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.021884832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.027486444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.02798781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.064086838Z" level=info msg="Created container 796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper" id=2b911702-7c37-4a23-906d-c258e73a17bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.064733228Z" level=info msg="Starting container: 796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae" id=0ed12475-668e-488a-8d92-4bf60ccc5568 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.066854478Z" level=info msg="Started container" PID=1794 containerID=796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper id=0ed12475-668e-488a-8d92-4bf60ccc5568 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67fffdf3b91e706cc7f50911009201999a02a1aa0fa55d1541d5d34a7d6dc529
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.16803193Z" level=info msg="Removing container: be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded" id=aa9d1c77-c381-48e0-9306-e2a68ab136d0 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.178742771Z" level=info msg="Removed container be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper" id=aa9d1c77-c381-48e0-9306-e2a68ab136d0 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	796166f8aad13       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   67fffdf3b91e7       dashboard-metrics-scraper-867fb5f87b-nqpzm   kubernetes-dashboard
	8af45e76145b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   086bd8c723c62       storage-provisioner                          kube-system
	c24118d3ceb70       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   95879088ec6b1       kubernetes-dashboard-b84665fb8-cwnkq         kubernetes-dashboard
	d5679f317a432       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           50 seconds ago      Running             coredns                     0                   12497d4ae7c07       coredns-7d764666f9-bvbhf                     kube-system
	1a8a87158e5ee       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   3daccd5763d6a       busybox                                      default
	041ee86966827       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           50 seconds ago      Running             kube-proxy                  0                   4147cce926d40       kube-proxy-s9zn2                             kube-system
	2073d619fdee4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   086bd8c723c62       storage-provisioner                          kube-system
	eba75d1119200       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   87f8473c4e891       kindnet-k65q9                                kube-system
	6a724b46320af       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           53 seconds ago      Running             kube-apiserver              0                   0bd62f7ea060c       kube-apiserver-no-preload-008839             kube-system
	594bd97237274       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           53 seconds ago      Running             kube-scheduler              0                   98b924fd0d6ee       kube-scheduler-no-preload-008839             kube-system
	be81b724a08e3       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           53 seconds ago      Running             kube-controller-manager     0                   39dc00d7688a8       kube-controller-manager-no-preload-008839    kube-system
	db01c7251a1de       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   7f0477a8eef8f       etcd-no-preload-008839                       kube-system
	
	
	==> coredns [d5679f317a43257700a6ccf786a90e51b3e511459a6a40b7b87ce098fef9f917] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47478 - 48259 "HINFO IN 5639324877831771745.7104423327596062010. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078795035s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-008839
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-008839
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=no-preload-008839
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_05_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:04:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-008839
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:06:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:06:27 +0000   Fri, 05 Dec 2025 07:04:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:06:27 +0000   Fri, 05 Dec 2025 07:04:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:06:27 +0000   Fri, 05 Dec 2025 07:04:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:06:27 +0000   Fri, 05 Dec 2025 07:05:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-008839
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                fb2974e4-0c42-4f11-b1e5-d1c92fcbd635
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-bvbhf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-no-preload-008839                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-k65q9                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-no-preload-008839              250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-008839     200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-s9zn2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-no-preload-008839              100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-nqpzm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-cwnkq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  105s  node-controller  Node no-preload-008839 event: Registered Node no-preload-008839 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node no-preload-008839 event: Registered Node no-preload-008839 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [db01c7251a1de792a86f18e9816a7049b81ed772e45d77eb735784deca6ba7ed] <==
	{"level":"warn","ts":"2025-12-05T07:05:56.747821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.753982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.762620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.768624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.774993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.781719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.787954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.794117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.800840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.808164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.819733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.832361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.838905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.845676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.853086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.860515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.866791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.873261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.880175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.888466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.905360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.911303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.917300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.924347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.971615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59280","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:06:49 up  1:49,  0 user,  load average: 3.59, 3.31, 2.26
	Linux no-preload-008839 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eba75d111920093803e4d959a724517ca2eb3568d86480365967a5d7db5ff7c7] <==
	I1205 07:05:58.652746       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:05:58.652998       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1205 07:05:58.653162       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:05:58.653178       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:05:58.653200       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:05:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:05:58.762852       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:05:58.762902       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:05:58.762919       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:05:58.852557       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:05:59.252277       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:05:59.252310       1 metrics.go:72] Registering metrics
	I1205 07:05:59.252391       1 controller.go:711] "Syncing nftables rules"
	I1205 07:06:08.763019       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:06:08.763101       1 main.go:301] handling current node
	I1205 07:06:18.763006       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:06:18.763041       1 main.go:301] handling current node
	I1205 07:06:28.763766       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:06:28.763826       1 main.go:301] handling current node
	I1205 07:06:38.768495       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:06:38.768558       1 main.go:301] handling current node
	I1205 07:06:48.769435       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:06:48.769484       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6a724b46320af3fc8ab17876c05bc17339d6f6ecdfe81d092e5183ab79c4eff0] <==
	I1205 07:05:57.429843       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:05:57.429850       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:05:57.429856       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:05:57.430078       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:57.430136       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 07:05:57.430287       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:57.430430       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 07:05:57.430669       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1205 07:05:57.431790       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:57.433022       1 policy_source.go:248] refreshing policies
	E1205 07:05:57.436497       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 07:05:57.439055       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1205 07:05:57.444187       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:05:57.667808       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 07:05:57.695603       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:05:57.711160       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:05:57.717702       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:05:57.723421       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:05:57.750303       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.29.76"}
	I1205 07:05:57.760459       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.174.171"}
	I1205 07:05:58.334106       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1205 07:06:00.973937       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:06:01.022313       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:06:01.272835       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [be81b724a08e37b312d3b403f0b0b16774c9d6683375247cd1da277090b0bb4c] <==
	I1205 07:06:00.574298       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:06:00.574304       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573886       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573915       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573896       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573921       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573900       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573914       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573932       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.574542       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573927       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573878       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573936       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573825       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.574670       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.575171       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1205 07:06:00.575280       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.575446       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-008839"
	I1205 07:06:00.575529       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1205 07:06:00.583387       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:06:00.587724       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.674133       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.674151       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1205 07:06:00.674155       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1205 07:06:00.684505       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [041ee86966827d1886c5681f5cc5a2513966eb3b32160dabab858784a89fb062] <==
	I1205 07:05:58.436526       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:05:58.504768       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:05:58.605123       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:58.605162       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1205 07:05:58.605255       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:05:58.623395       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:05:58.623434       1 server_linux.go:136] "Using iptables Proxier"
	I1205 07:05:58.628652       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:05:58.628992       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1205 07:05:58.629011       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:58.630379       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:05:58.630446       1 config.go:200] "Starting service config controller"
	I1205 07:05:58.630464       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:05:58.630473       1 config.go:309] "Starting node config controller"
	I1205 07:05:58.630481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:05:58.630447       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:05:58.630488       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:05:58.630412       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:05:58.630496       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:05:58.731175       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 07:05:58.731189       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:05:58.731204       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [594bd97237274f1209e2fd22044fdd8fa87336d8f65f7ae5ab3d67cbd890b73e] <==
	I1205 07:05:55.704918       1 serving.go:386] Generated self-signed cert in-memory
	W1205 07:05:57.352604       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 07:05:57.352660       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 07:05:57.352672       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 07:05:57.352682       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 07:05:57.386741       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1205 07:05:57.386841       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:57.391163       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:05:57.391200       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:05:57.391357       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 07:05:57.391513       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 07:05:57.491334       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 05 07:06:14 no-preload-008839 kubelet[719]: E1205 07:06:14.090571     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-008839" containerName="kube-apiserver"
	Dec 05 07:06:17 no-preload-008839 kubelet[719]: E1205 07:06:17.568207     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:17 no-preload-008839 kubelet[719]: I1205 07:06:17.568679     719 scope.go:122] "RemoveContainer" containerID="fdd1cb5f31c58dac4c760ce02d6a59df0ec2bcc83c0378b6ae415d603be441ab"
	Dec 05 07:06:17 no-preload-008839 kubelet[719]: E1205 07:06:17.568915     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nqpzm_kubernetes-dashboard(7c68918c-1f80-45c6-869d-8d2e029ad1c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" podUID="7c68918c-1f80-45c6-869d-8d2e029ad1c1"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: E1205 07:06:19.018984     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: I1205 07:06:19.019037     719 scope.go:122] "RemoveContainer" containerID="fdd1cb5f31c58dac4c760ce02d6a59df0ec2bcc83c0378b6ae415d603be441ab"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: I1205 07:06:19.103899     719 scope.go:122] "RemoveContainer" containerID="fdd1cb5f31c58dac4c760ce02d6a59df0ec2bcc83c0378b6ae415d603be441ab"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: E1205 07:06:19.104190     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: I1205 07:06:19.104232     719 scope.go:122] "RemoveContainer" containerID="be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: E1205 07:06:19.104464     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nqpzm_kubernetes-dashboard(7c68918c-1f80-45c6-869d-8d2e029ad1c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" podUID="7c68918c-1f80-45c6-869d-8d2e029ad1c1"
	Dec 05 07:06:27 no-preload-008839 kubelet[719]: E1205 07:06:27.567688     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:27 no-preload-008839 kubelet[719]: I1205 07:06:27.567730     719 scope.go:122] "RemoveContainer" containerID="be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded"
	Dec 05 07:06:27 no-preload-008839 kubelet[719]: E1205 07:06:27.567934     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nqpzm_kubernetes-dashboard(7c68918c-1f80-45c6-869d-8d2e029ad1c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" podUID="7c68918c-1f80-45c6-869d-8d2e029ad1c1"
	Dec 05 07:06:29 no-preload-008839 kubelet[719]: I1205 07:06:29.130840     719 scope.go:122] "RemoveContainer" containerID="2073d619fdee4927ee6cab8da5025189478e4d40ae7780f71aca88691a55b2b6"
	Dec 05 07:06:32 no-preload-008839 kubelet[719]: E1205 07:06:32.412426     719 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-bvbhf" containerName="coredns"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: E1205 07:06:41.019196     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: I1205 07:06:41.019232     719 scope.go:122] "RemoveContainer" containerID="be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: I1205 07:06:41.166220     719 scope.go:122] "RemoveContainer" containerID="be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: E1205 07:06:41.166528     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: I1205 07:06:41.166569     719 scope.go:122] "RemoveContainer" containerID="796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: E1205 07:06:41.166852     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nqpzm_kubernetes-dashboard(7c68918c-1f80-45c6-869d-8d2e029ad1c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" podUID="7c68918c-1f80-45c6-869d-8d2e029ad1c1"
	Dec 05 07:06:46 no-preload-008839 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:06:46 no-preload-008839 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:06:46 no-preload-008839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:06:46 no-preload-008839 systemd[1]: kubelet.service: Consumed 1.674s CPU time.
	
	
	==> kubernetes-dashboard [c24118d3ceb705dfa27fd02fb7a78d52069c473b9d07b42ae3776ce72626c519] <==
	2025/12/05 07:06:04 Using namespace: kubernetes-dashboard
	2025/12/05 07:06:04 Using in-cluster config to connect to apiserver
	2025/12/05 07:06:04 Using secret token for csrf signing
	2025/12/05 07:06:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/05 07:06:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/05 07:06:04 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/05 07:06:04 Generating JWE encryption key
	2025/12/05 07:06:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/05 07:06:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/05 07:06:05 Initializing JWE encryption key from synchronized object
	2025/12/05 07:06:05 Creating in-cluster Sidecar client
	2025/12/05 07:06:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:06:05 Serving insecurely on HTTP port: 9090
	2025/12/05 07:06:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:06:04 Starting overwatch
	
	
	==> storage-provisioner [2073d619fdee4927ee6cab8da5025189478e4d40ae7780f71aca88691a55b2b6] <==
	I1205 07:05:58.403691       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 07:06:28.406738       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6] <==
	I1205 07:06:29.179998       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:06:29.187381       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:06:29.187430       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1205 07:06:29.189221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:32.644087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:36.905438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:40.504477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:43.558639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:46.581727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:46.587984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:06:46.588221       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:06:46.588824       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60a68084-c5d5-49bc-8273-b0880be31ea1", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-008839_62555777-06e5-4b9c-9f53-9eb4e8d0fe24 became leader
	I1205 07:06:46.588900       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-008839_62555777-06e5-4b9c-9f53-9eb4e8d0fe24!
	W1205 07:06:46.592311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:46.598837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:06:46.689525       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-008839_62555777-06e5-4b9c-9f53-9eb4e8d0fe24!
	W1205 07:06:48.603826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:48.608436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
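One detail worth flagging from the kube-scheduler section of the logs above: the warning about configmap/extension-apiserver-authentication quotes its own suggested remedy. A minimal sketch of that command, keeping the placeholder names from the log message (ROLEBINDING_NAME and YOUR_NS:YOUR_SA are placeholders, not values used by this test):

	kubectl --context no-preload-008839 create rolebinding -n kube-system ROLEBINDING_NAME \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA

As the log itself notes, the scheduler continues without the authentication configuration, so the warning is informational here.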
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008839 -n no-preload-008839
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008839 -n no-preload-008839: exit status 2 (368.249057ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-008839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
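The query above uses a field selector to list, across all namespaces, any pod whose phase is not Running. An equivalent illustrative form that also prints each pod's namespace (a sketch of the same selector, not a command the test runs):

	kubectl --context no-preload-008839 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'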
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-008839
helpers_test.go:243: (dbg) docker inspect no-preload-008839:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55",
	        "Created": "2025-12-05T07:04:31.584731019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 366914,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:05:49.009564476Z",
	            "FinishedAt": "2025-12-05T07:05:47.747007176Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/hosts",
	        "LogPath": "/var/lib/docker/containers/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55/9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55-json.log",
	        "Name": "/no-preload-008839",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-008839:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-008839",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ca1114060bc3fc8924c4b0294520b7ed35c443f090b777d5720743c0e356e55",
	                "LowerDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc343032c31bd42f0149910f30b554879889c6f89a9afccd097c0b1463eda47f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-008839",
	                "Source": "/var/lib/docker/volumes/no-preload-008839/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-008839",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-008839",
	                "name.minikube.sigs.k8s.io": "no-preload-008839",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bc32fdb41895d050f08719c0a398da9a6d0a0338fd5531acc261e9034d9a1990",
	            "SandboxKey": "/var/run/docker/netns/bc32fdb41895",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-008839": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b424bb5358c0ff78bed421f719287c2770f3aa97ebe3ad623f9f893abf37a15e",
	                    "EndpointID": "45844ebdd56528fd490117de910d87f89cbd3e29f331f5372b74a7548deffbb4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "da:57:9e:f3:a8:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-008839",
	                        "9ca1114060bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
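In the docker inspect output above, NetworkSettings.Ports shows the container's 8443/tcp endpoint (the API server port) published on 127.0.0.1:33121. A small sketch for extracting just that host port with a Go template (a generic docker inspect idiom, not something the harness runs):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-008839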
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008839 -n no-preload-008839
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008839 -n no-preload-008839: exit status 2 (322.696979ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
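Both status checks in this post-mortem use Go-template output (--format={{.Host}} and --format={{.APIServer}}), so only a single field is printed even though the command exits non-zero. An illustrative variant that reports the usual component fields in one call (Host, Kubelet, APIServer, and Kubeconfig are the fields minikube's default status output shows; this exact invocation is a sketch, not part of the test):

	out/minikube-linux-amd64 status -p no-preload-008839 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'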
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-008839 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-008839 logs -n 25: (1.286486244s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p disable-driver-mounts-245906                                                                                                                                                                                                                      │ disable-driver-mounts-245906 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:04 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:04 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-874709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p old-k8s-version-874709 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-874709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p no-preload-008839 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-172186 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p no-preload-008839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-770390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ image   │ no-preload-008839 image list --format=json                                                                                                                                                                                                           │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p no-preload-008839 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:06:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:06:26.588234  375543 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:06:26.588509  375543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:26.588519  375543 out.go:374] Setting ErrFile to fd 2...
	I1205 07:06:26.588525  375543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:26.588695  375543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:06:26.589115  375543 out.go:368] Setting JSON to false
	I1205 07:06:26.590262  375543 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6531,"bootTime":1764911856,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:06:26.590314  375543 start.go:143] virtualization: kvm guest
	I1205 07:06:26.592067  375543 out.go:179] * [embed-certs-770390] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:06:26.593635  375543 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:06:26.593659  375543 notify.go:221] Checking for updates...
	I1205 07:06:26.595966  375543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:06:26.597221  375543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:26.598431  375543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:06:26.599882  375543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:06:26.601166  375543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:06:26.384025  375309 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:06:26.384217  375309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:06:26.408220  375309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:06:26.408239  375309 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:06:26.412289  375309 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1205 07:06:26.618671  375309 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 07:06:26.618857  375309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:06:26.618897  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json: {Name:mk1a3d1498588cc35fd8c475060bbc66ec8b6678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:26.618949  375309 cache.go:107] acquiring lock: {Name:mk98363952ca1815516604fb7dbfef9be11a7d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.618987  375309 cache.go:107] acquiring lock: {Name:mk167c9428ef1965e0e29561c9593491905126f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.618994  375309 cache.go:107] acquiring lock: {Name:mk205a6d5dedd135c0c99429d72b9328d8d5dc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619036  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:06:26.619036  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:06:26.619047  375309 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 62.095µs
	I1205 07:06:26.618958  375309 cache.go:107] acquiring lock: {Name:mkf79bca1dcd2e8402871ccbd85f08189f26d5a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619060  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:06:26.619047  375309 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.433µs
	I1205 07:06:26.619070  375309 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:06:26.618954  375309 cache.go:107] acquiring lock: {Name:mk4eccc9886628e868c0adec616b704f1c193356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619075  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:06:26.619080  375309 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 78.568µs
	I1205 07:06:26.619083  375309 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 131.383µs
	I1205 07:06:26.619092  375309 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:06:26.619073  375309 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:06:26.619100  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:06:26.619101  375309 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619062  375309 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619093  375309 cache.go:107] acquiring lock: {Name:mk55ddd5ec022e6049bc6d750efbad0639669233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619107  375309 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 163.978µs
	I1205 07:06:26.619116  375309 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619122  375309 start.go:360] acquireMachinesLock for newest-cni-624263: {Name:mka35bbd7b5824f70f8017fd9b3a0ee56ab72931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619139  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:06:26.619147  375309 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 56.825µs
	I1205 07:06:26.619164  375309 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:06:26.619187  375309 start.go:364] duration metric: took 54.102µs to acquireMachinesLock for "newest-cni-624263"
	I1205 07:06:26.619178  375309 cache.go:107] acquiring lock: {Name:mk7e52439bbd1c3c582b2dbb20db8467b0caa4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619209  375309 start.go:93] Provisioning new machine with config: &{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:06:26.619295  375309 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:06:26.619290  375309 cache.go:107] acquiring lock: {Name:mk64ac073eb60c52be1998c1349c3f317eb7eb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619407  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:06:26.619430  375309 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 331.673µs
	I1205 07:06:26.619447  375309 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:06:26.619268  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:06:26.619462  375309 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 475.67µs
	I1205 07:06:26.619474  375309 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619482  375309 cache.go:87] Successfully saved all images to host disk.
	I1205 07:06:26.602620  375543 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:26.603160  375543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:06:26.627216  375543 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:06:26.627376  375543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:26.688879  375543 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-12-05 07:06:26.678958971 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:26.689006  375543 docker.go:319] overlay module found
	I1205 07:06:26.690710  375543 out.go:179] * Using the docker driver based on existing profile
	I1205 07:06:26.691897  375543 start.go:309] selected driver: docker
	I1205 07:06:26.691911  375543 start.go:927] validating driver "docker" against &{Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:26.692006  375543 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:06:26.692563  375543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:26.753344  375543 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-12-05 07:06:26.743404439 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:26.753715  375543 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:26.753753  375543 cni.go:84] Creating CNI manager for ""
	I1205 07:06:26.753817  375543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:26.753868  375543 start.go:353] cluster config:
	{Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:26.755544  375543 out.go:179] * Starting "embed-certs-770390" primary control-plane node in "embed-certs-770390" cluster
	I1205 07:06:26.756738  375543 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:06:26.757980  375543 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:06:26.759082  375543 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:26.759119  375543 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 07:06:26.759135  375543 cache.go:65] Caching tarball of preloaded images
	I1205 07:06:26.759194  375543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:06:26.759237  375543 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 07:06:26.759253  375543 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:06:26.759384  375543 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/config.json ...
	I1205 07:06:26.780168  375543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:06:26.780185  375543 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:06:26.780201  375543 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:06:26.780233  375543 start.go:360] acquireMachinesLock for embed-certs-770390: {Name:mk0b160cfba8a84d98b6566219365b8df24bf5b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.780296  375543 start.go:364] duration metric: took 44.736µs to acquireMachinesLock for "embed-certs-770390"
	I1205 07:06:26.780318  375543 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:06:26.780342  375543 fix.go:54] fixHost starting: 
	I1205 07:06:26.780580  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:26.799942  375543 fix.go:112] recreateIfNeeded on embed-certs-770390: state=Stopped err=<nil>
	W1205 07:06:26.799979  375543 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:06:23.903235  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:25.904229  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:27.904712  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:26.624904  375309 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:06:26.625236  375309 start.go:159] libmachine.API.Create for "newest-cni-624263" (driver="docker")
	I1205 07:06:26.625293  375309 client.go:173] LocalClient.Create starting
	I1205 07:06:26.625440  375309 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem
	I1205 07:06:26.625497  375309 main.go:143] libmachine: Decoding PEM data...
	I1205 07:06:26.625526  375309 main.go:143] libmachine: Parsing certificate...
	I1205 07:06:26.625585  375309 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem
	I1205 07:06:26.625618  375309 main.go:143] libmachine: Decoding PEM data...
	I1205 07:06:26.625632  375309 main.go:143] libmachine: Parsing certificate...
	I1205 07:06:26.626063  375309 cli_runner.go:164] Run: docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:06:26.645528  375309 cli_runner.go:211] docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:06:26.645637  375309 network_create.go:284] running [docker network inspect newest-cni-624263] to gather additional debugging logs...
	I1205 07:06:26.645660  375309 cli_runner.go:164] Run: docker network inspect newest-cni-624263
	W1205 07:06:26.666476  375309 cli_runner.go:211] docker network inspect newest-cni-624263 returned with exit code 1
	I1205 07:06:26.666508  375309 network_create.go:287] error running [docker network inspect newest-cni-624263]: docker network inspect newest-cni-624263: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-624263 not found
	I1205 07:06:26.666525  375309 network_create.go:289] output of [docker network inspect newest-cni-624263]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-624263 not found
	
	** /stderr **
	I1205 07:06:26.666651  375309 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:26.685626  375309 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d57cb024a629 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:ab:20:17:d9:1a} reservation:<nil>}
	I1205 07:06:26.686333  375309 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-29ce45f1f3fd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:f2:e1:5a:fb:fd} reservation:<nil>}
	I1205 07:06:26.687062  375309 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-18be16a82b81 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:25:6c:b3:f6:c6} reservation:<nil>}
	I1205 07:06:26.687648  375309 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-931902d22986 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:1a:d5:72:cd:51} reservation:<nil>}
	I1205 07:06:26.688156  375309 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b424bb5358c0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e6:4c:79:ba:46:30} reservation:<nil>}
	I1205 07:06:26.688952  375309 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7252f408ef75 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ce:04:ba:35:24:10} reservation:<nil>}
	I1205 07:06:26.689983  375309 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020b7df0}
	I1205 07:06:26.690008  375309 network_create.go:124] attempt to create docker network newest-cni-624263 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1205 07:06:26.690065  375309 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-624263 newest-cni-624263
	I1205 07:06:26.743102  375309 network_create.go:108] docker network newest-cni-624263 192.168.103.0/24 created
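
The lines above show the network setup walking the private 192.168.x.0/24 range in steps of 9 (49, 58, 67, ...), skipping every subnet already claimed by an existing docker bridge, and settling on 192.168.103.0/24. A minimal Go sketch of that selection step, assuming the set of taken subnets has already been collected (for example from docker network inspect); firstFreeSubnet is a hypothetical helper for illustration, not minikube's actual implementation.

    // firstFreeSubnet mirrors the scan in the log: candidate /24 networks are
    // tried in steps of 9 until one is not already in use.
    package main

    import "fmt"

    func firstFreeSubnet(taken map[string]bool) string {
        for third := 49; third <= 247; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[cidr] {
                return cidr
            }
        }
        return "" // nothing free in the scanned range
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
            "192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // prints 192.168.103.0/24, matching the log
    }

Scanning in a fixed stride keeps each profile's bridge network in its own /24, which is why the six existing clusters above occupy 49 through 94 and this one lands on 103.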
	I1205 07:06:26.743126  375309 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-624263" container
	I1205 07:06:26.743192  375309 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:06:26.762523  375309 cli_runner.go:164] Run: docker volume create newest-cni-624263 --label name.minikube.sigs.k8s.io=newest-cni-624263 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:06:26.780448  375309 oci.go:103] Successfully created a docker volume newest-cni-624263
	I1205 07:06:26.780537  375309 cli_runner.go:164] Run: docker run --rm --name newest-cni-624263-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-624263 --entrypoint /usr/bin/test -v newest-cni-624263:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:06:27.200143  375309 oci.go:107] Successfully prepared a docker volume newest-cni-624263
	I1205 07:06:27.200209  375309 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1205 07:06:27.200286  375309 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1205 07:06:27.200310  375309 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1205 07:06:27.200392  375309 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:06:27.265015  375309 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-624263 --name newest-cni-624263 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-624263 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-624263 --network newest-cni-624263 --ip 192.168.103.2 --volume newest-cni-624263:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
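
The docker run invocation above is what actually creates the node container: privileged, attached to the freshly created bridge with the calculated static IP, with /var backed by the named volume and the SSH/API-server ports published to loopback. Below is a stripped-down illustration of issuing a similar invocation from Go with os/exec; it is not minikube's cli_runner, only a subset of the flags is reproduced, and the image reference is shortened (the digest from the log is omitted).

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // A subset of the flags visible in the log line above.
        args := []string{
            "run", "-d", "-t", "--privileged",
            "--hostname", "newest-cni-624263", "--name", "newest-cni-624263",
            "--network", "newest-cni-624263", "--ip", "192.168.103.2",
            "--volume", "newest-cni-624263:/var", "--memory=3072mb",
            "--publish", "127.0.0.1::22", "--publish", "127.0.0.1::8443",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974",
        }
        out, err := exec.Command("docker", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("docker run failed: %v\n%s", err, out)
        }
        log.Printf("container id: %s", out)
    }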
	I1205 07:06:27.552297  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Running}}
	I1205 07:06:27.573173  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:27.593054  375309 cli_runner.go:164] Run: docker exec newest-cni-624263 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:06:27.634139  375309 oci.go:144] the created container "newest-cni-624263" has a running status.
	I1205 07:06:27.634169  375309 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa...
	I1205 07:06:27.810850  375309 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:06:27.838307  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:27.864433  375309 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:06:27.864459  375309 kic_runner.go:114] Args: [docker exec --privileged newest-cni-624263 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:06:27.914874  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:27.937979  375309 machine.go:94] provisionDockerMachine start ...
	I1205 07:06:27.938080  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:27.957892  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:27.958181  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:27.958199  375309 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:06:28.099298  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:06:28.099339  375309 ubuntu.go:182] provisioning hostname "newest-cni-624263"
	I1205 07:06:28.099404  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.118216  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:28.118434  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:28.118447  375309 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-624263 && echo "newest-cni-624263" | sudo tee /etc/hostname
	I1205 07:06:28.266352  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:06:28.266427  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.285381  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:28.285625  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:28.285656  375309 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-624263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-624263/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-624263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:06:28.421424  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:06:28.421450  375309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:06:28.421501  375309 ubuntu.go:190] setting up certificates
	I1205 07:06:28.421519  375309 provision.go:84] configureAuth start
	I1205 07:06:28.421570  375309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:06:28.439867  375309 provision.go:143] copyHostCerts
	I1205 07:06:28.439922  375309 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:06:28.439932  375309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:06:28.439988  375309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:06:28.440064  375309 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:06:28.440072  375309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:06:28.440097  375309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:06:28.440150  375309 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:06:28.440157  375309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:06:28.440178  375309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:06:28.440226  375309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.newest-cni-624263 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-624263]
	I1205 07:06:28.490526  375309 provision.go:177] copyRemoteCerts
	I1205 07:06:28.490572  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:06:28.490604  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.508254  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:28.607548  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:06:28.626034  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:06:28.643274  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:06:28.660190  375309 provision.go:87] duration metric: took 238.65746ms to configureAuth
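
configureAuth above generates a server certificate whose SANs cover the loopback address, the container's static IP, and the machine name, then copies it to /etc/docker on the node. A self-contained Go sketch of producing a certificate with those SANs via crypto/x509; for brevity it is self-signed, whereas the flow in the log signs with the minikube CA key pair, so treat this as an illustration of the SAN handling only.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-624263"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            DNSNames:     []string{"localhost", "minikube", "newest-cni-624263"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }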
	I1205 07:06:28.660213  375309 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:06:28.660451  375309 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:06:28.660552  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.678203  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:28.678454  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:28.678473  375309 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:06:28.964368  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:06:28.964391  375309 machine.go:97] duration metric: took 1.026387988s to provisionDockerMachine
	I1205 07:06:28.964401  375309 client.go:176] duration metric: took 2.339097815s to LocalClient.Create
	I1205 07:06:28.964417  375309 start.go:167] duration metric: took 2.339183991s to libmachine.API.Create "newest-cni-624263"
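
Each provisioning step above ("About to run SSH command: ...") is executed against the container's forwarded SSH port (33133 here) with the generated id_rsa key and the docker user. A minimal sketch of running one such command with golang.org/x/crypto/ssh; the path, port, and command come from the log, but this is an illustration rather than minikube's own SSH runner.

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPath := "/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa"
        key, err := os.ReadFile(keyPath)
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway test container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname") // same command as the first step in the log
        if err != nil {
            log.Fatal(err)
        }
        os.Stdout.Write(out)
    }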
	I1205 07:06:28.964424  375309 start.go:293] postStartSetup for "newest-cni-624263" (driver="docker")
	I1205 07:06:28.964437  375309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:06:28.964496  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:06:28.964532  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.983132  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.083395  375309 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:06:29.086772  375309 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:06:29.086801  375309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:06:29.086821  375309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:06:29.086871  375309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:06:29.086968  375309 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:06:29.087082  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:06:29.094830  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:29.113691  375309 start.go:296] duration metric: took 149.256692ms for postStartSetup
	I1205 07:06:29.114008  375309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:06:29.132535  375309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:06:29.132800  375309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:06:29.132848  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:29.154540  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.253994  375309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:06:29.258256  375309 start.go:128] duration metric: took 2.638946756s to createHost
	I1205 07:06:29.258278  375309 start.go:83] releasing machines lock for "newest-cni-624263", held for 2.6390804s
	I1205 07:06:29.258357  375309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:06:29.275163  375309 ssh_runner.go:195] Run: cat /version.json
	I1205 07:06:29.275199  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:29.275243  375309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:06:29.275301  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:29.292525  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.293433  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.439694  375309 ssh_runner.go:195] Run: systemctl --version
	I1205 07:06:29.445781  375309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:06:29.478433  375309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:06:29.482835  375309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:06:29.482896  375309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:06:29.507064  375309 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 07:06:29.507086  375309 start.go:496] detecting cgroup driver to use...
	I1205 07:06:29.507115  375309 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:06:29.507154  375309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:06:29.523263  375309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:06:29.534962  375309 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:06:29.535000  375309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:06:29.549931  375309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:06:29.566793  375309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:06:29.650059  375309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:06:29.736486  375309 docker.go:234] disabling docker service ...
	I1205 07:06:29.736547  375309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:06:29.754991  375309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:06:29.766663  375309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:06:29.846539  375309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:06:29.924690  375309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:06:29.936548  375309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:06:29.950065  375309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:06:29.950123  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.959781  375309 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:06:29.959833  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.967908  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.975938  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.983900  375309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:06:29.991260  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.999272  375309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:30.012680  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:30.021140  375309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:06:30.028051  375309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:06:30.034722  375309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:30.112871  375309 ssh_runner.go:195] Run: sudo systemctl restart crio
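
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: the pause image is pinned to registry.k8s.io/pause:3.10.1, the cgroup manager is switched to systemd to match the host, conmon is moved to the pod cgroup, net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls, and crio is then restarted. A small Go sketch of the first two rewrites, operating on an in-memory copy of the file rather than over SSH; the starting values in conf are assumed for the example.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Stand-in for the current contents of /etc/crio/crio.conf.d/02-crio.conf.
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"

        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "systemd"`)

        fmt.Print(conf)
    }

Matching whole lines rather than single keys is what makes the rewrite idempotent: rerunning it against an already-patched file leaves the same two settings in place.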
	I1205 07:06:30.237839  375309 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:06:30.237906  375309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:06:30.241691  375309 start.go:564] Will wait 60s for crictl version
	I1205 07:06:30.241747  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.244968  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:06:30.267110  375309 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:06:30.267179  375309 ssh_runner.go:195] Run: crio --version
	I1205 07:06:30.294236  375309 ssh_runner.go:195] Run: crio --version
	I1205 07:06:30.323746  375309 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 07:06:30.324950  375309 cli_runner.go:164] Run: docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:30.341782  375309 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1205 07:06:30.345513  375309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
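
The one-liner above makes the host.minikube.internal mapping idempotent: any existing line for that name is dropped with grep -v before the fresh 192.168.103.1 entry is appended and the file is copied back over /etc/hosts. The same filter-then-append step in Go, over an in-memory copy of the file (illustrative only; ensureHostsEntry is a hypothetical helper).

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry drops any line ending in "\t<name>" and appends a fresh
    // "<ip>\t<name>" mapping, mirroring the grep -v / echo pipeline in the log.
    func ensureHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n", "192.168.103.1", "host.minikube.internal"))
    }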
	I1205 07:06:30.356609  375309 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1205 07:06:28.056673  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:30.560609  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:30.357703  375309 kubeadm.go:884] updating cluster {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:06:30.357837  375309 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:06:30.357886  375309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:30.381946  375309 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:06:30.381975  375309 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:06:30.382034  375309 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.382056  375309 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:06:30.382071  375309 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.382087  375309 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.382058  375309 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.382035  375309 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.382041  375309 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.382074  375309 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.383617  375309 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.383669  375309 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.383686  375309 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.383611  375309 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.383775  375309 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.383990  375309 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:06:30.384965  375309 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.385843  375309 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.534923  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.535969  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.541762  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.547313  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.558484  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.574838  375309 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1205 07:06:30.574883  375309 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.575084  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.578994  375309 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1205 07:06:30.579036  375309 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.579087  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.587216  375309 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1205 07:06:30.587248  375309 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.587287  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.601815  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.637213  375309 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1205 07:06:30.637252  375309 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.637293  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.637309  375309 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1205 07:06:30.637355  375309 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.637389  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.637394  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.637440  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.637462  375309 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1205 07:06:30.637481  375309 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.637445  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.637510  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.668185  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.668206  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.668216  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.668196  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.668257  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.668292  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.705400  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.705445  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.705403  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.705531  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.706185  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.706239  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.739595  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:06:30.739704  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:06:30.741607  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:30.741700  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:30.741619  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.741797  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.744944  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.744985  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:30.745064  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:30.746956  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:06:30.746987  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1205 07:06:30.794130  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:06:30.794147  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:30.794128  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.794178  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.794187  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1205 07:06:30.794196  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1205 07:06:30.794229  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:06:30.794234  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:30.794261  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:06:30.794338  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:06:30.836933  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:06:30.836964  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1205 07:06:30.838245  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.838272  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1205 07:06:30.838338  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.838364  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1205 07:06:30.857777  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.952672  375309 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 07:06:30.952727  375309 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.952794  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.991362  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:31.049944  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:31.105055  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:31.161810  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:31.161973  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:31.166067  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:06:31.166166  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
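The interleaved 375309 lines above show minikube's cached-image transfer loop: inspect the runtime for the image, remove any stale tag with crictl rmi, stat the tarball under /var/lib/minikube/images, scp it from the host cache if it is missing, then podman load it. A rough manual equivalent on the node is sketched below; the image name and tarball path mirror the ones in the log but are meant only as an illustration, not as commands found verbatim in this run.

    # Sketch of the cache-transfer pattern shown above (illustrative image/paths).
    img="registry.k8s.io/coredns/coredns:v1.13.1"
    tarball="/var/lib/minikube/images/coredns_v1.13.1"
    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
      sudo crictl rmi "$img" 2>/dev/null || true            # drop a stale tag, ignore "not found"
      if ! stat -c "%s %y" "$tarball" >/dev/null 2>&1; then
        echo "tarball missing; minikube would scp it from the host cache here"
      fi
      sudo podman load -i "$tarball"                        # import into the CRI-O/podman image store
    fi
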
	I1205 07:06:26.801554  375543 out.go:252] * Restarting existing docker container for "embed-certs-770390" ...
	I1205 07:06:26.801629  375543 cli_runner.go:164] Run: docker start embed-certs-770390
	I1205 07:06:27.074915  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:27.097444  375543 kic.go:430] container "embed-certs-770390" state is running.
	I1205 07:06:27.097863  375543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:06:27.118527  375543 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/config.json ...
	I1205 07:06:27.118771  375543 machine.go:94] provisionDockerMachine start ...
	I1205 07:06:27.118869  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:27.140642  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:27.140903  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:27.140920  375543 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:06:27.141707  375543 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53866->127.0.0.1:33128: read: connection reset by peer
	I1205 07:06:30.285862  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-770390
	
	I1205 07:06:30.285883  375543 ubuntu.go:182] provisioning hostname "embed-certs-770390"
	I1205 07:06:30.285963  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:30.306084  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:30.306389  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:30.306406  375543 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-770390 && echo "embed-certs-770390" | sudo tee /etc/hostname
	I1205 07:06:30.457639  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-770390
	
	I1205 07:06:30.457716  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:30.475904  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:30.476118  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:30.476140  375543 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-770390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-770390/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-770390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:06:30.618737  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:06:30.618762  375543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:06:30.618787  375543 ubuntu.go:190] setting up certificates
	I1205 07:06:30.618798  375543 provision.go:84] configureAuth start
	I1205 07:06:30.618872  375543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:06:30.637076  375543 provision.go:143] copyHostCerts
	I1205 07:06:30.637138  375543 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:06:30.637151  375543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:06:30.637230  375543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:06:30.637377  375543 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:06:30.637400  375543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:06:30.637449  375543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:06:30.637555  375543 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:06:30.637567  375543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:06:30.637606  375543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:06:30.637698  375543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.embed-certs-770390 san=[127.0.0.1 192.168.76.2 embed-certs-770390 localhost minikube]
	I1205 07:06:30.850789  375543 provision.go:177] copyRemoteCerts
	I1205 07:06:30.850846  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:06:30.850878  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:30.870854  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:30.979857  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:06:31.002122  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:06:31.026307  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:06:31.050483  375543 provision.go:87] duration metric: took 431.665526ms to configureAuth
	I1205 07:06:31.050515  375543 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:06:31.050734  375543 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:31.050879  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:31.077241  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:31.077607  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:31.077644  375543 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1205 07:06:30.403214  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:32.403773  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:32.903916  366710 pod_ready.go:94] pod "coredns-7d764666f9-bvbhf" is "Ready"
	I1205 07:06:32.903942  366710 pod_ready.go:86] duration metric: took 34.00575162s for pod "coredns-7d764666f9-bvbhf" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.906601  366710 pod_ready.go:83] waiting for pod "etcd-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.913301  366710 pod_ready.go:94] pod "etcd-no-preload-008839" is "Ready"
	I1205 07:06:32.913400  366710 pod_ready.go:86] duration metric: took 6.777304ms for pod "etcd-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.915636  366710 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.919542  366710 pod_ready.go:94] pod "kube-apiserver-no-preload-008839" is "Ready"
	I1205 07:06:32.919566  366710 pod_ready.go:86] duration metric: took 3.909248ms for pod "kube-apiserver-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.921563  366710 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:33.101533  366710 pod_ready.go:94] pod "kube-controller-manager-no-preload-008839" is "Ready"
	I1205 07:06:33.101569  366710 pod_ready.go:86] duration metric: took 179.984485ms for pod "kube-controller-manager-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:33.301800  366710 pod_ready.go:83] waiting for pod "kube-proxy-s9zn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:33.702088  366710 pod_ready.go:94] pod "kube-proxy-s9zn2" is "Ready"
	I1205 07:06:33.702116  366710 pod_ready.go:86] duration metric: took 400.29234ms for pod "kube-proxy-s9zn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:31.721865  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:06:31.721894  375543 machine.go:97] duration metric: took 4.603106939s to provisionDockerMachine
	I1205 07:06:31.721911  375543 start.go:293] postStartSetup for "embed-certs-770390" (driver="docker")
	I1205 07:06:31.721926  375543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:06:31.721985  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:06:31.722034  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:31.745060  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:31.850959  375543 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:06:31.854831  375543 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:06:31.854862  375543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:06:31.854875  375543 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:06:31.854930  375543 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:06:31.855030  375543 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:06:31.855158  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:06:31.863927  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:31.883380  375543 start.go:296] duration metric: took 161.454914ms for postStartSetup
	I1205 07:06:31.883456  375543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:06:31.883520  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:31.906830  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:32.008279  375543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:06:32.013614  375543 fix.go:56] duration metric: took 5.233266702s for fixHost
	I1205 07:06:32.013639  375543 start.go:83] releasing machines lock for "embed-certs-770390", held for 5.233329197s
	I1205 07:06:32.013713  375543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:06:32.035130  375543 ssh_runner.go:195] Run: cat /version.json
	I1205 07:06:32.035191  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:32.035218  375543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:06:32.035305  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:32.059370  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:32.060657  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:32.825514  375543 ssh_runner.go:195] Run: systemctl --version
	I1205 07:06:32.832229  375543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:06:32.867423  375543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:06:32.872157  375543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:06:32.872230  375543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:06:32.880841  375543 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:06:32.880864  375543 start.go:496] detecting cgroup driver to use...
	I1205 07:06:32.880892  375543 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:06:32.880945  375543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:06:32.897262  375543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:06:32.913628  375543 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:06:32.913679  375543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:06:32.931183  375543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:06:32.943212  375543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:06:33.031242  375543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:06:33.124377  375543 docker.go:234] disabling docker service ...
	I1205 07:06:33.124432  375543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:06:33.138291  375543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:06:33.150719  375543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:06:33.243720  375543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:06:33.334574  375543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:06:33.346746  375543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:06:33.360678  375543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:06:33.360741  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.369727  375543 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:06:33.369786  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.378916  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.387258  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.395950  375543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:06:33.405206  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.415134  375543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.425222  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.434369  375543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:06:33.442019  375543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:06:33.449717  375543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:33.543423  375543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:06:33.975505  375543 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:06:33.975586  375543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:06:33.979949  375543 start.go:564] Will wait 60s for crictl version
	I1205 07:06:33.980033  375543 ssh_runner.go:195] Run: which crictl
	I1205 07:06:33.984307  375543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:06:34.008163  375543 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:06:34.008225  375543 ssh_runner.go:195] Run: crio --version
	I1205 07:06:34.036756  375543 ssh_runner.go:195] Run: crio --version
	I1205 07:06:34.070974  375543 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
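All of the sed edits in the block above land in /etc/crio/crio.conf.d/02-crio.conf. After the crio restart, the drop-in and the crictl endpoint can be spot-checked roughly as follows; the expected values are the ones the log just configured.

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
    cat /etc/crictl.yaml                           # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version                            # RuntimeName: cri-o, RuntimeVersion: 1.34.2
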
	I1205 07:06:33.902396  366710 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:34.301736  366710 pod_ready.go:94] pod "kube-scheduler-no-preload-008839" is "Ready"
	I1205 07:06:34.301762  366710 pod_ready.go:86] duration metric: took 399.341028ms for pod "kube-scheduler-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:34.301777  366710 pod_ready.go:40] duration metric: took 35.406378156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:34.356972  366710 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1205 07:06:34.358967  366710 out.go:179] * Done! kubectl is now configured to use "no-preload-008839" cluster and "default" namespace by default
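The 366710 lines above gate on every kube-system control-plane pod reporting Ready before declaring the profile done. The same check can be reproduced from the host with kubectl wait, using the label selectors the log lists; a sketch against the freshly configured context:

    kubectl --context no-preload-008839 -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-dns --timeout=5m
    kubectl --context no-preload-008839 -n kube-system wait --for=condition=Ready \
      pod -l component=kube-apiserver --timeout=5m
    kubectl --context no-preload-008839 -n kube-system get pods -o wide   # all should be Running/Ready
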
	I1205 07:06:34.071865  375543 cli_runner.go:164] Run: docker network inspect embed-certs-770390 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:34.089273  375543 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:06:34.093527  375543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:34.104382  375543 kubeadm.go:884] updating cluster {Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:06:34.104493  375543 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:34.104533  375543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:34.135986  375543 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:06:34.136005  375543 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:06:34.136046  375543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:34.163958  375543 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:06:34.163976  375543 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:06:34.163982  375543 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1205 07:06:34.164096  375543 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-770390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:06:34.164159  375543 ssh_runner.go:195] Run: crio config
	I1205 07:06:34.210786  375543 cni.go:84] Creating CNI manager for ""
	I1205 07:06:34.210808  375543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:34.210819  375543 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:06:34.210839  375543 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-770390 NodeName:embed-certs-770390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:06:34.210959  375543 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-770390"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:06:34.211023  375543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:06:34.219056  375543 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:06:34.219118  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:06:34.227080  375543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1205 07:06:34.239752  375543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:06:34.251999  375543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
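The 2214-byte kubeadm.yaml.new written above is the four-document YAML dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). It can be sanity-checked on the node before kubeadm consumes it; kubeadm config validate is available on recent kubeadm releases, so treat this as a sketch:

    grep -c '^---$' /var/tmp/minikube/kubeadm.yaml.new     # expect 3 separators, i.e. 4 documents
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
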
	I1205 07:06:34.263865  375543 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:06:34.267417  375543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:34.277134  375543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:34.394783  375543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:34.419292  375543 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390 for IP: 192.168.76.2
	I1205 07:06:34.419313  375543 certs.go:195] generating shared ca certs ...
	I1205 07:06:34.419352  375543 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:34.419526  375543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:06:34.419586  375543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:06:34.419598  375543 certs.go:257] generating profile certs ...
	I1205 07:06:34.419694  375543 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/client.key
	I1205 07:06:34.419767  375543 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key.46ffd30e
	I1205 07:06:34.419858  375543 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.key
	I1205 07:06:34.420010  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:06:34.420057  375543 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:06:34.420071  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:06:34.420110  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:06:34.420143  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:06:34.420172  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:06:34.420226  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:34.421032  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:06:34.440844  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:06:34.465635  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:06:34.487656  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:06:34.511641  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 07:06:34.535311  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:06:34.552834  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:06:34.570691  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:06:34.588483  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:06:34.605748  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:06:34.624519  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:06:34.644092  375543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:06:34.657592  375543 ssh_runner.go:195] Run: openssl version
	I1205 07:06:34.663869  375543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.673595  375543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:06:34.683140  375543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.688216  375543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.688277  375543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.738387  375543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:06:34.748071  375543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.757769  375543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:06:34.767020  375543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.770922  375543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.770972  375543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.813377  375543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:06:34.823642  375543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.833453  375543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:06:34.841565  375543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.846018  375543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.846067  375543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.881430  375543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:06:34.888928  375543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:06:34.892723  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:06:34.932540  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:06:34.979914  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:06:35.029643  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:06:35.084612  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:06:35.132242  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
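The symlink names checked above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, and the -checkend 86400 calls verify that each cert remains valid for at least 24 hours. The general pattern, sketched with the minikube CA as the example:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")          # e.g. b5213941 in this run
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"         # hash-named link OpenSSL resolves at runtime
    openssl x509 -noout -in "$cert" -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"
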
	I1205 07:06:35.171706  375543 kubeadm.go:401] StartCluster: {Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:35.171804  375543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:06:35.171880  375543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:06:35.202472  375543 cri.go:89] found id: "2e99e708af8cdf7e8644b2c854970fe3b2f9131df99f8ff6c3a19b08659e1df2"
	I1205 07:06:35.202495  375543 cri.go:89] found id: "4d4e5c825a7de3068675039cb3151e44142096587a1c8f2d75ad7ecbd5429caa"
	I1205 07:06:35.202501  375543 cri.go:89] found id: "923febfdc8bccb1ad8239b49c08d7497c407d21accd38106c20a1aba8cecaffb"
	I1205 07:06:35.202506  375543 cri.go:89] found id: "ae1745cf83f11e7391209efe832ac4ca4aab557828ba3aab75cf48e7fe75b73f"
	I1205 07:06:35.202514  375543 cri.go:89] found id: ""
	I1205 07:06:35.202559  375543 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:06:35.214717  375543 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:35Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:06:35.214778  375543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:06:35.223159  375543 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:06:35.223177  375543 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:06:35.223230  375543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:06:35.231356  375543 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:06:35.232131  375543 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-770390" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:35.232612  375543 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-770390" cluster setting kubeconfig missing "embed-certs-770390" context setting]
	I1205 07:06:35.233423  375543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:35.235317  375543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:06:35.242634  375543 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1205 07:06:35.242665  375543 kubeadm.go:602] duration metric: took 19.477371ms to restartPrimaryControlPlane
	I1205 07:06:35.242675  375543 kubeadm.go:403] duration metric: took 70.981616ms to StartCluster
	I1205 07:06:35.242690  375543 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:35.242761  375543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:35.244041  375543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:35.244259  375543 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:06:35.244338  375543 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:06:35.244434  375543 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-770390"
	I1205 07:06:35.244450  375543 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-770390"
	W1205 07:06:35.244462  375543 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:06:35.244471  375543 addons.go:70] Setting dashboard=true in profile "embed-certs-770390"
	I1205 07:06:35.244496  375543 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:06:35.244500  375543 addons.go:239] Setting addon dashboard=true in "embed-certs-770390"
	W1205 07:06:35.244519  375543 addons.go:248] addon dashboard should already be in state true
	I1205 07:06:35.244510  375543 addons.go:70] Setting default-storageclass=true in profile "embed-certs-770390"
	I1205 07:06:35.244540  375543 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-770390"
	I1205 07:06:35.244551  375543 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:06:35.244494  375543 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:35.244825  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.244991  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.245043  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.247149  375543 out.go:179] * Verifying Kubernetes components...
	I1205 07:06:35.248386  375543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:35.272894  375543 addons.go:239] Setting addon default-storageclass=true in "embed-certs-770390"
	W1205 07:06:35.272915  375543 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:06:35.272939  375543 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:06:35.273400  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.275193  375543 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:35.275251  375543 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:06:35.276704  375543 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:35.276758  375543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:06:35.276764  375543 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1205 07:06:33.056148  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:35.060453  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:31.366255  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1205 07:06:32.346995  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.184991035s)
	I1205 07:06:32.347021  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:06:32.347055  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:32.347104  375309 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1205 07:06:32.347120  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:32.347138  375309 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:06:32.347061  375309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.180871282s)
	I1205 07:06:32.347169  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:06:32.347188  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:32.347192  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1205 07:06:33.570397  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.223258044s)
	I1205 07:06:33.570426  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:06:33.570455  375309 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:06:33.570499  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:06:33.570511  375309 ssh_runner.go:235] Completed: which crictl: (1.223307009s)
	I1205 07:06:33.570561  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:06:34.893160  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.322638807s)
	I1205 07:06:34.893187  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:06:34.893208  375309 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:06:34.893215  375309 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.322634396s)
	I1205 07:06:34.893245  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:06:34.893276  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:06:35.276808  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:35.277808  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:06:35.277826  375543 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:06:35.277888  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:35.301215  375543 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:35.301315  375543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:06:35.301418  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:35.308857  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:35.320257  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:35.332128  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:35.426032  375543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:35.431462  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:06:35.431489  375543 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:06:35.438950  375543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:35.447296  375543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-770390" to be "Ready" ...
	I1205 07:06:35.451227  375543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:35.451848  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:06:35.451886  375543 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:06:35.468647  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:06:35.468668  375543 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:06:35.498954  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:06:35.498976  375543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:06:35.545774  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:06:35.545808  375543 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:06:35.588544  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:06:35.588570  375543 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:06:35.610093  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:06:35.610117  375543 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:06:35.644554  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:06:35.644601  375543 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:06:35.667656  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:06:35.667682  375543 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:06:35.688651  375543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:06:37.536634  375543 node_ready.go:49] node "embed-certs-770390" is "Ready"
	I1205 07:06:37.536671  375543 node_ready.go:38] duration metric: took 2.089351455s for node "embed-certs-770390" to be "Ready" ...
	I1205 07:06:37.536687  375543 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:06:37.536743  375543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:06:38.146255  375543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.707271235s)
	I1205 07:06:38.146314  375543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.695052574s)
	I1205 07:06:38.146429  375543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.457746781s)
	I1205 07:06:38.146472  375543 api_server.go:72] duration metric: took 2.902184723s to wait for apiserver process to appear ...
	I1205 07:06:38.146527  375543 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:06:38.146554  375543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 07:06:38.147993  375543 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-770390 addons enable metrics-server
	
	I1205 07:06:38.154740  375543 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:06:38.154761  375543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:06:38.160172  375543 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1205 07:06:37.561481  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:40.055806  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:36.440601  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.547331042s)
	I1205 07:06:36.440633  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:06:36.440654  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:36.440666  375309 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.547364518s)
	I1205 07:06:36.440699  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:36.440737  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:06:38.061822  375309 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.621051807s)
	I1205 07:06:38.061871  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.621152631s)
	I1205 07:06:38.061900  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:06:38.061925  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:06:38.061878  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1205 07:06:38.061986  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:06:38.062043  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:06:38.066235  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:06:38.066269  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1205 07:06:39.480656  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.418643669s)
	I1205 07:06:39.480686  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:06:39.480713  375309 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:06:39.480763  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:06:40.059650  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:06:40.059692  375309 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:06:40.059745  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1205 07:06:40.168218  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:06:40.168260  375309 cache_images.go:125] Successfully loaded all cached images
	I1205 07:06:40.168267  375309 cache_images.go:94] duration metric: took 9.786277822s to LoadCachedImages
	I1205 07:06:40.168281  375309 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1205 07:06:40.168392  375309 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-624263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:06:40.168461  375309 ssh_runner.go:195] Run: crio config
	I1205 07:06:40.215126  375309 cni.go:84] Creating CNI manager for ""
	I1205 07:06:40.215148  375309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:40.215165  375309 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:06:40.215185  375309 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-624263 NodeName:newest-cni-624263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:06:40.215294  375309 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-624263"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:06:40.215371  375309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:06:40.223545  375309 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:06:40.223608  375309 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:06:40.231456  375309 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1205 07:06:40.231456  375309 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1205 07:06:40.231452  375309 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1205 07:06:40.231550  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:06:40.231600  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:06:40.231616  375309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:40.236450  375309 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:06:40.236478  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1205 07:06:40.236508  375309 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:06:40.236532  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1205 07:06:40.253269  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:06:40.289073  375309 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:06:40.289104  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1205 07:06:40.688980  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:06:40.696712  375309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1205 07:06:40.710980  375309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:06:40.726034  375309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 07:06:40.738766  375309 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:06:40.742492  375309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:40.752230  375309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:40.831660  375309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:40.858130  375309 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263 for IP: 192.168.103.2
	I1205 07:06:40.858175  375309 certs.go:195] generating shared ca certs ...
	I1205 07:06:40.858196  375309 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.858496  375309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:06:40.858561  375309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:06:40.858573  375309 certs.go:257] generating profile certs ...
	I1205 07:06:40.858645  375309 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key
	I1205 07:06:40.858659  375309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.crt with IP's: []
	I1205 07:06:40.893856  375309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.crt ...
	I1205 07:06:40.893898  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.crt: {Name:mk2b6195b99d5e275f660429f3814d5bdcd8191d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.894105  375309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key ...
	I1205 07:06:40.894140  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key: {Name:mke407b69941bd64dfca0f6ab1c80bb1c45b93ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.894275  375309 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada
	I1205 07:06:40.894306  375309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1205 07:06:40.941482  375309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada ...
	I1205 07:06:40.941507  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada: {Name:mk677ad869a55b9090eb744dc3feff29e8064497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.941661  375309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada ...
	I1205 07:06:40.941680  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada: {Name:mkb7c70fb23c29d27bdcbb21d4add4953a296250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.941769  375309 certs.go:382] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt
	I1205 07:06:40.941862  375309 certs.go:386] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key
	I1205 07:06:40.941930  375309 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key
	I1205 07:06:40.941945  375309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt with IP's: []
	I1205 07:06:41.076769  375309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt ...
	I1205 07:06:41.076794  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt: {Name:mke1ae4d7cafe67dff134743b1bfeb82268bc450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:41.076927  375309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key ...
	I1205 07:06:41.076940  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key: {Name:mk11a3d7395501747e70db233d7500d344284191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:41.077110  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:06:41.077146  375309 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:06:41.077156  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:06:41.077191  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:06:41.077216  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:06:41.077245  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:06:41.077285  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:41.077869  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:06:41.097495  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:06:41.114088  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:06:41.131277  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:06:41.148175  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:06:41.168203  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:06:41.190211  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:06:38.161254  375543 addons.go:530] duration metric: took 2.916934723s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1205 07:06:38.647484  375543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 07:06:38.654056  375543 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:06:38.654081  375543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:06:39.147586  375543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 07:06:39.152741  375543 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1205 07:06:39.153911  375543 api_server.go:141] control plane version: v1.34.2
	I1205 07:06:39.153938  375543 api_server.go:131] duration metric: took 1.007398463s to wait for apiserver health ...
	I1205 07:06:39.153949  375543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:06:39.158877  375543 system_pods.go:59] 8 kube-system pods found
	I1205 07:06:39.158918  375543 system_pods.go:61] "coredns-66bc5c9577-rg55r" [68bcad40-cb20-4ded-b15a-268ddb469470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:06:39.158931  375543 system_pods.go:61] "etcd-embed-certs-770390" [22f37425-6bf2-4bd1-ac8d-a7d8e1a66635] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:06:39.158944  375543 system_pods.go:61] "kindnet-dmpt2" [66c4a813-7f26-44e7-ab6f-be6422d710e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:06:39.158959  375543 system_pods.go:61] "kube-apiserver-embed-certs-770390" [77f4e205-d878-4cb2-9047-4e59db7afa54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:39.158971  375543 system_pods.go:61] "kube-controller-manager-embed-certs-770390" [ec537bde-1efe-493a-911e-43a74e613a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:39.158984  375543 system_pods.go:61] "kube-proxy-7bjnn" [6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:06:39.158989  375543 system_pods.go:61] "kube-scheduler-embed-certs-770390" [75177695-2b4c-4190-a054-eb007d9e3ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:39.158999  375543 system_pods.go:61] "storage-provisioner" [5c5ef936-ac84-44f0-8299-e431bcbbf8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:06:39.159007  375543 system_pods.go:74] duration metric: took 5.050804ms to wait for pod list to return data ...
	I1205 07:06:39.159021  375543 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:06:39.161392  375543 default_sa.go:45] found service account: "default"
	I1205 07:06:39.161413  375543 default_sa.go:55] duration metric: took 2.38628ms for default service account to be created ...
	I1205 07:06:39.161420  375543 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:06:39.163935  375543 system_pods.go:86] 8 kube-system pods found
	I1205 07:06:39.163966  375543 system_pods.go:89] "coredns-66bc5c9577-rg55r" [68bcad40-cb20-4ded-b15a-268ddb469470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:06:39.163978  375543 system_pods.go:89] "etcd-embed-certs-770390" [22f37425-6bf2-4bd1-ac8d-a7d8e1a66635] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:06:39.163992  375543 system_pods.go:89] "kindnet-dmpt2" [66c4a813-7f26-44e7-ab6f-be6422d710e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:06:39.164005  375543 system_pods.go:89] "kube-apiserver-embed-certs-770390" [77f4e205-d878-4cb2-9047-4e59db7afa54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:39.164016  375543 system_pods.go:89] "kube-controller-manager-embed-certs-770390" [ec537bde-1efe-493a-911e-43a74e613a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:39.164027  375543 system_pods.go:89] "kube-proxy-7bjnn" [6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:06:39.164038  375543 system_pods.go:89] "kube-scheduler-embed-certs-770390" [75177695-2b4c-4190-a054-eb007d9e3ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:39.164055  375543 system_pods.go:89] "storage-provisioner" [5c5ef936-ac84-44f0-8299-e431bcbbf8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:06:39.164067  375543 system_pods.go:126] duration metric: took 2.64117ms to wait for k8s-apps to be running ...
	I1205 07:06:39.164079  375543 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:06:39.164127  375543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:39.181008  375543 system_svc.go:56] duration metric: took 16.921756ms WaitForService to wait for kubelet
	I1205 07:06:39.181041  375543 kubeadm.go:587] duration metric: took 3.936753325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:39.181064  375543 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:06:39.184000  375543 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:06:39.184034  375543 node_conditions.go:123] node cpu capacity is 8
	I1205 07:06:39.184053  375543 node_conditions.go:105] duration metric: took 2.982688ms to run NodePressure ...
	I1205 07:06:39.184070  375543 start.go:242] waiting for startup goroutines ...
	I1205 07:06:39.184085  375543 start.go:247] waiting for cluster config update ...
	I1205 07:06:39.184102  375543 start.go:256] writing updated cluster config ...
	I1205 07:06:39.193568  375543 ssh_runner.go:195] Run: rm -f paused
	I1205 07:06:39.197314  375543 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:39.200374  375543 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rg55r" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:06:41.204973  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:06:41.212073  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:06:41.231583  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:06:41.253120  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:06:41.272824  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:06:41.292610  375309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:06:41.308462  375309 ssh_runner.go:195] Run: openssl version
	I1205 07:06:41.316714  375309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.325091  375309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:06:41.332343  375309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.336139  375309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.336194  375309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.372232  375309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:06:41.379524  375309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:06:41.386631  375309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.393737  375309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:06:41.401581  375309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.405466  375309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.405515  375309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.439825  375309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:06:41.447189  375309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:06:41.455927  375309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.463164  375309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:06:41.470435  375309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.473992  375309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.474034  375309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.515208  375309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:06:41.525475  375309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0
	I1205 07:06:41.535050  375309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:06:41.540368  375309 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:06:41.540428  375309 kubeadm.go:401] StartCluster: {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:41.540520  375309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:06:41.540579  375309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:06:41.574193  375309 cri.go:89] found id: ""
	I1205 07:06:41.574260  375309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:06:41.582447  375309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:06:41.590634  375309 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:06:41.590683  375309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:06:41.598032  375309 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:06:41.598048  375309 kubeadm.go:158] found existing configuration files:
	
	I1205 07:06:41.598083  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:06:41.605848  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:06:41.605900  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:06:41.613213  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:06:41.620371  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:06:41.620417  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:06:41.627391  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:06:41.634542  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:06:41.634592  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:06:41.641338  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:06:41.648894  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:06:41.648944  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:06:41.656607  375309 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:06:41.696598  375309 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:06:41.696706  375309 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:06:41.759716  375309 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:06:41.759824  375309 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1205 07:06:41.759883  375309 kubeadm.go:319] OS: Linux
	I1205 07:06:41.759954  375309 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:06:41.760020  375309 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:06:41.760091  375309 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:06:41.760146  375309 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:06:41.760192  375309 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:06:41.760252  375309 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:06:41.760365  375309 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:06:41.760434  375309 kubeadm.go:319] CGROUPS_IO: enabled
	I1205 07:06:41.814175  375309 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:06:41.814315  375309 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:06:41.814467  375309 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:06:41.827236  375309 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:06:41.830237  375309 out.go:252]   - Generating certificates and keys ...
	I1205 07:06:41.830391  375309 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:06:41.830478  375309 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:06:41.861271  375309 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:06:42.094457  375309 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:06:42.144264  375309 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:06:42.276913  375309 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:06:42.446846  375309 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:06:42.447034  375309 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-624263] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1205 07:06:42.609304  375309 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:06:42.609696  375309 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-624263] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1205 07:06:42.767082  375309 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:06:43.048880  375309 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:06:43.119451  375309 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:06:43.119727  375309 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:06:43.389014  375309 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:06:43.643799  375309 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:06:43.853126  375309 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:06:44.168810  375309 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:06:44.219881  375309 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:06:44.220746  375309 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:06:44.227994  375309 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1205 07:06:42.556667  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:44.557029  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:44.229477  375309 out.go:252]   - Booting up control plane ...
	I1205 07:06:44.229641  375309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:06:44.229761  375309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:06:44.230667  375309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:06:44.249377  375309 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:06:44.249530  375309 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:06:44.258992  375309 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:06:44.259591  375309 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:06:44.259660  375309 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:06:44.400746  375309 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:06:44.400911  375309 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:06:45.401590  375309 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00117802s
	I1205 07:06:45.405602  375309 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:06:45.405744  375309 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1205 07:06:45.405949  375309 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:06:45.406099  375309 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1205 07:06:43.207479  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:45.732411  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:06:46.416593  375309 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.010733066s
	I1205 07:06:47.437314  375309 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.031843502s
	I1205 07:06:49.407519  375309 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00206161s
	I1205 07:06:49.424839  375309 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 07:06:49.434626  375309 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 07:06:49.444666  375309 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 07:06:49.444989  375309 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-624263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 07:06:49.453496  375309 kubeadm.go:319] [bootstrap-token] Using token: 6cz87l.2zljzwp80f64fvtx
	W1205 07:06:47.055999  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:49.054841  369138 pod_ready.go:94] pod "coredns-66bc5c9577-lzlm8" is "Ready"
	I1205 07:06:49.054862  369138 pod_ready.go:86] duration metric: took 36.004755066s for pod "coredns-66bc5c9577-lzlm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.057541  369138 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.061520  369138 pod_ready.go:94] pod "etcd-default-k8s-diff-port-172186" is "Ready"
	I1205 07:06:49.061544  369138 pod_ready.go:86] duration metric: took 3.984636ms for pod "etcd-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.063582  369138 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.067353  369138 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-172186" is "Ready"
	I1205 07:06:49.067370  369138 pod_ready.go:86] duration metric: took 3.767456ms for pod "kube-apiserver-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.069303  369138 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.254115  369138 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-172186" is "Ready"
	I1205 07:06:49.254136  369138 pod_ready.go:86] duration metric: took 184.787953ms for pod "kube-controller-manager-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.461655  369138 pod_ready.go:83] waiting for pod "kube-proxy-fpss6" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.857656  369138 pod_ready.go:94] pod "kube-proxy-fpss6" is "Ready"
	I1205 07:06:49.857685  369138 pod_ready.go:86] duration metric: took 396.007735ms for pod "kube-proxy-fpss6" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:50.055882  369138 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:50.453368  369138 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-172186" is "Ready"
	I1205 07:06:50.453396  369138 pod_ready.go:86] duration metric: took 397.4857ms for pod "kube-scheduler-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:50.453413  369138 pod_ready.go:40] duration metric: took 37.406615801s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:50.507622  369138 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 07:06:50.544152  369138 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-172186" cluster and "default" namespace by default
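The pod_ready wait logged above (looping until each kube-system component pod reports Ready or is gone) can be approximated by hand with kubectl wait. A minimal sketch, assuming the kubeconfig context created by this run and an illustrative 120s timeout:

    # Wait for the CoreDNS pods to report the Ready condition (label taken from the log above)
    kubectl --context default-k8s-diff-port-172186 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s
    # The other labels listed above (component=etcd, component=kube-apiserver, ...) work the same way.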
	
	
	==> CRI-O <==
	Dec 05 07:06:19 no-preload-008839 crio[568]: time="2025-12-05T07:06:19.067334184Z" level=info msg="Started container" PID=1741 containerID=be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper id=fc2097e5-2162-40c0-9c21-12f6f3a4bbf6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67fffdf3b91e706cc7f50911009201999a02a1aa0fa55d1541d5d34a7d6dc529
	Dec 05 07:06:19 no-preload-008839 crio[568]: time="2025-12-05T07:06:19.105141132Z" level=info msg="Removing container: fdd1cb5f31c58dac4c760ce02d6a59df0ec2bcc83c0378b6ae415d603be441ab" id=679f3d13-259d-43f0-b2a5-1376e82a80a7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:19 no-preload-008839 crio[568]: time="2025-12-05T07:06:19.116376604Z" level=info msg="Removed container fdd1cb5f31c58dac4c760ce02d6a59df0ec2bcc83c0378b6ae415d603be441ab: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper" id=679f3d13-259d-43f0-b2a5-1376e82a80a7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.13131289Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=35a17115-95e6-47b2-9e96-e52cacc4c075 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.132359757Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=51bf6922-1320-4546-852e-3c8db2f54541 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.133524597Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4f88e49d-8e74-4aa4-b145-129235ffc7dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.133664972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.137731586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.137865193Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/607764f379c5c9d369ecf353f9d5deaecdb446689c6a9700bf943f17565851c8/merged/etc/passwd: no such file or directory"
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.137889253Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/607764f379c5c9d369ecf353f9d5deaecdb446689c6a9700bf943f17565851c8/merged/etc/group: no such file or directory"
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.138079466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.164394854Z" level=info msg="Created container 8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6: kube-system/storage-provisioner/storage-provisioner" id=4f88e49d-8e74-4aa4-b145-129235ffc7dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.164980358Z" level=info msg="Starting container: 8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6" id=68be4556-334e-48fb-93d6-ce8ec979900b name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:29 no-preload-008839 crio[568]: time="2025-12-05T07:06:29.167126435Z" level=info msg="Started container" PID=1756 containerID=8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6 description=kube-system/storage-provisioner/storage-provisioner id=68be4556-334e-48fb-93d6-ce8ec979900b name=/runtime.v1.RuntimeService/StartContainer sandboxID=086bd8c723c626a8a55dad439fb64c41d101f88e90a9e8124fcbc802653232ef
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.019786187Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=37cf8cf3-8efe-4594-bc91-c0a5408afde7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.02068776Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b3564dc5-c2e0-474b-8c0e-28484104ce4f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.021747444Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper" id=2b911702-7c37-4a23-906d-c258e73a17bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.021884832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.027486444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.02798781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.064086838Z" level=info msg="Created container 796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper" id=2b911702-7c37-4a23-906d-c258e73a17bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.064733228Z" level=info msg="Starting container: 796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae" id=0ed12475-668e-488a-8d92-4bf60ccc5568 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.066854478Z" level=info msg="Started container" PID=1794 containerID=796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper id=0ed12475-668e-488a-8d92-4bf60ccc5568 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67fffdf3b91e706cc7f50911009201999a02a1aa0fa55d1541d5d34a7d6dc529
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.16803193Z" level=info msg="Removing container: be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded" id=aa9d1c77-c381-48e0-9306-e2a68ab136d0 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:41 no-preload-008839 crio[568]: time="2025-12-05T07:06:41.178742771Z" level=info msg="Removed container be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm/dashboard-metrics-scraper" id=aa9d1c77-c381-48e0-9306-e2a68ab136d0 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	796166f8aad13       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   67fffdf3b91e7       dashboard-metrics-scraper-867fb5f87b-nqpzm   kubernetes-dashboard
	8af45e76145b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   086bd8c723c62       storage-provisioner                          kube-system
	c24118d3ceb70       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   95879088ec6b1       kubernetes-dashboard-b84665fb8-cwnkq         kubernetes-dashboard
	d5679f317a432       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           52 seconds ago      Running             coredns                     0                   12497d4ae7c07       coredns-7d764666f9-bvbhf                     kube-system
	1a8a87158e5ee       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   3daccd5763d6a       busybox                                      default
	041ee86966827       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           52 seconds ago      Running             kube-proxy                  0                   4147cce926d40       kube-proxy-s9zn2                             kube-system
	2073d619fdee4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   086bd8c723c62       storage-provisioner                          kube-system
	eba75d1119200       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   87f8473c4e891       kindnet-k65q9                                kube-system
	6a724b46320af       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           55 seconds ago      Running             kube-apiserver              0                   0bd62f7ea060c       kube-apiserver-no-preload-008839             kube-system
	594bd97237274       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           55 seconds ago      Running             kube-scheduler              0                   98b924fd0d6ee       kube-scheduler-no-preload-008839             kube-system
	be81b724a08e3       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           55 seconds ago      Running             kube-controller-manager     0                   39dc00d7688a8       kube-controller-manager-no-preload-008839    kube-system
	db01c7251a1de       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   7f0477a8eef8f       etcd-no-preload-008839                       kube-system
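A container listing like the one above can be reproduced against the CRI-O runtime on the node itself; a rough sketch, reusing the profile name from this log:

    # List all containers (running and exited) known to the CRI runtime inside the minikube node
    minikube ssh -p no-preload-008839 -- sudo crictl ps -a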
	
	
	==> coredns [d5679f317a43257700a6ccf786a90e51b3e511459a6a40b7b87ce098fef9f917] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47478 - 48259 "HINFO IN 5639324877831771745.7104423327596062010. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078795035s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-008839
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-008839
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=no-preload-008839
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_05_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:04:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-008839
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:06:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:06:27 +0000   Fri, 05 Dec 2025 07:04:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:06:27 +0000   Fri, 05 Dec 2025 07:04:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:06:27 +0000   Fri, 05 Dec 2025 07:04:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:06:27 +0000   Fri, 05 Dec 2025 07:05:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-008839
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                fb2974e4-0c42-4f11-b1e5-d1c92fcbd635
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-bvbhf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-no-preload-008839                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-k65q9                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-008839              250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-008839     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-s9zn2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-008839              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-nqpzm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-cwnkq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  107s  node-controller  Node no-preload-008839 event: Registered Node no-preload-008839 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node no-preload-008839 event: Registered Node no-preload-008839 in Controller
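This node view can be regenerated at any time; a minimal sketch, assuming the kubectl context for this profile:

    kubectl --context no-preload-008839 describe node no-preload-008839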
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [db01c7251a1de792a86f18e9816a7049b81ed772e45d77eb735784deca6ba7ed] <==
	{"level":"warn","ts":"2025-12-05T07:05:56.747821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.753982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.762620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.768624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.774993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.781719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.787954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.794117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.800840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.808164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.819733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.832361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.838905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.845676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.853086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.860515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.866791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.873261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.880175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.888466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.905360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.911303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.917300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.924347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:05:56.971615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59280","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:06:51 up  1:49,  0 user,  load average: 3.59, 3.31, 2.26
	Linux no-preload-008839 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eba75d111920093803e4d959a724517ca2eb3568d86480365967a5d7db5ff7c7] <==
	I1205 07:05:58.652746       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:05:58.652998       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1205 07:05:58.653162       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:05:58.653178       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:05:58.653200       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:05:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:05:58.762852       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:05:58.762902       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:05:58.762919       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:05:58.852557       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:05:59.252277       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:05:59.252310       1 metrics.go:72] Registering metrics
	I1205 07:05:59.252391       1 controller.go:711] "Syncing nftables rules"
	I1205 07:06:08.763019       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:06:08.763101       1 main.go:301] handling current node
	I1205 07:06:18.763006       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:06:18.763041       1 main.go:301] handling current node
	I1205 07:06:28.763766       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:06:28.763826       1 main.go:301] handling current node
	I1205 07:06:38.768495       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:06:38.768558       1 main.go:301] handling current node
	I1205 07:06:48.769435       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:06:48.769484       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6a724b46320af3fc8ab17876c05bc17339d6f6ecdfe81d092e5183ab79c4eff0] <==
	I1205 07:05:57.429843       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:05:57.429850       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:05:57.429856       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:05:57.430078       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:57.430136       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 07:05:57.430287       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:57.430430       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 07:05:57.430669       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1205 07:05:57.431790       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:57.433022       1 policy_source.go:248] refreshing policies
	E1205 07:05:57.436497       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 07:05:57.439055       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1205 07:05:57.444187       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:05:57.667808       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 07:05:57.695603       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:05:57.711160       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:05:57.717702       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:05:57.723421       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:05:57.750303       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.29.76"}
	I1205 07:05:57.760459       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.174.171"}
	I1205 07:05:58.334106       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1205 07:06:00.973937       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:06:01.022313       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:06:01.022313       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:06:01.272835       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [be81b724a08e37b312d3b403f0b0b16774c9d6683375247cd1da277090b0bb4c] <==
	I1205 07:06:00.574298       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:06:00.574304       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573886       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573915       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573896       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573921       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573900       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573914       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573932       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.574542       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573927       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573878       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573936       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.573825       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.574670       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.575171       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1205 07:06:00.575280       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.575446       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-008839"
	I1205 07:06:00.575529       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1205 07:06:00.583387       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:06:00.587724       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.674133       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:00.674151       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1205 07:06:00.674155       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1205 07:06:00.684505       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [041ee86966827d1886c5681f5cc5a2513966eb3b32160dabab858784a89fb062] <==
	I1205 07:05:58.436526       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:05:58.504768       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:05:58.605123       1 shared_informer.go:377] "Caches are synced"
	I1205 07:05:58.605162       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1205 07:05:58.605255       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:05:58.623395       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:05:58.623434       1 server_linux.go:136] "Using iptables Proxier"
	I1205 07:05:58.628652       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:05:58.628992       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1205 07:05:58.629011       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:58.630379       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:05:58.630446       1 config.go:200] "Starting service config controller"
	I1205 07:05:58.630464       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:05:58.630473       1 config.go:309] "Starting node config controller"
	I1205 07:05:58.630481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:05:58.630447       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:05:58.630488       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:05:58.630412       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:05:58.630496       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:05:58.731175       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 07:05:58.731189       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:05:58.731204       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [594bd97237274f1209e2fd22044fdd8fa87336d8f65f7ae5ab3d67cbd890b73e] <==
	I1205 07:05:55.704918       1 serving.go:386] Generated self-signed cert in-memory
	W1205 07:05:57.352604       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 07:05:57.352660       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 07:05:57.352672       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 07:05:57.352682       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 07:05:57.386741       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1205 07:05:57.386841       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:05:57.391163       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:05:57.391200       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:05:57.391357       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 07:05:57.391513       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 07:05:57.491334       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 05 07:06:14 no-preload-008839 kubelet[719]: E1205 07:06:14.090571     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-008839" containerName="kube-apiserver"
	Dec 05 07:06:17 no-preload-008839 kubelet[719]: E1205 07:06:17.568207     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:17 no-preload-008839 kubelet[719]: I1205 07:06:17.568679     719 scope.go:122] "RemoveContainer" containerID="fdd1cb5f31c58dac4c760ce02d6a59df0ec2bcc83c0378b6ae415d603be441ab"
	Dec 05 07:06:17 no-preload-008839 kubelet[719]: E1205 07:06:17.568915     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nqpzm_kubernetes-dashboard(7c68918c-1f80-45c6-869d-8d2e029ad1c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" podUID="7c68918c-1f80-45c6-869d-8d2e029ad1c1"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: E1205 07:06:19.018984     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: I1205 07:06:19.019037     719 scope.go:122] "RemoveContainer" containerID="fdd1cb5f31c58dac4c760ce02d6a59df0ec2bcc83c0378b6ae415d603be441ab"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: I1205 07:06:19.103899     719 scope.go:122] "RemoveContainer" containerID="fdd1cb5f31c58dac4c760ce02d6a59df0ec2bcc83c0378b6ae415d603be441ab"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: E1205 07:06:19.104190     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: I1205 07:06:19.104232     719 scope.go:122] "RemoveContainer" containerID="be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded"
	Dec 05 07:06:19 no-preload-008839 kubelet[719]: E1205 07:06:19.104464     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nqpzm_kubernetes-dashboard(7c68918c-1f80-45c6-869d-8d2e029ad1c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" podUID="7c68918c-1f80-45c6-869d-8d2e029ad1c1"
	Dec 05 07:06:27 no-preload-008839 kubelet[719]: E1205 07:06:27.567688     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:27 no-preload-008839 kubelet[719]: I1205 07:06:27.567730     719 scope.go:122] "RemoveContainer" containerID="be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded"
	Dec 05 07:06:27 no-preload-008839 kubelet[719]: E1205 07:06:27.567934     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nqpzm_kubernetes-dashboard(7c68918c-1f80-45c6-869d-8d2e029ad1c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" podUID="7c68918c-1f80-45c6-869d-8d2e029ad1c1"
	Dec 05 07:06:29 no-preload-008839 kubelet[719]: I1205 07:06:29.130840     719 scope.go:122] "RemoveContainer" containerID="2073d619fdee4927ee6cab8da5025189478e4d40ae7780f71aca88691a55b2b6"
	Dec 05 07:06:32 no-preload-008839 kubelet[719]: E1205 07:06:32.412426     719 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-bvbhf" containerName="coredns"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: E1205 07:06:41.019196     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: I1205 07:06:41.019232     719 scope.go:122] "RemoveContainer" containerID="be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: I1205 07:06:41.166220     719 scope.go:122] "RemoveContainer" containerID="be97e290df2cab3326818f8d41a84f164d838c1377acd5d9d120699e70718ded"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: E1205 07:06:41.166528     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" containerName="dashboard-metrics-scraper"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: I1205 07:06:41.166569     719 scope.go:122] "RemoveContainer" containerID="796166f8aad13441c74286600e5c5677a2b5eba98fdeab6868ca91391ba0acae"
	Dec 05 07:06:41 no-preload-008839 kubelet[719]: E1205 07:06:41.166852     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nqpzm_kubernetes-dashboard(7c68918c-1f80-45c6-869d-8d2e029ad1c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nqpzm" podUID="7c68918c-1f80-45c6-869d-8d2e029ad1c1"
	Dec 05 07:06:46 no-preload-008839 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:06:46 no-preload-008839 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:06:46 no-preload-008839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:06:46 no-preload-008839 systemd[1]: kubelet.service: Consumed 1.674s CPU time.
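The last four kubelet entries show systemd stopping kubelet.service, consistent with the Pause step running against this profile at that moment. Whether the kubelet is still down can be checked with a sketch like the following, assuming the same profile name:

    minikube ssh -p no-preload-008839 -- sudo systemctl status kubelet --no-pager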
	
	
	==> kubernetes-dashboard [c24118d3ceb705dfa27fd02fb7a78d52069c473b9d07b42ae3776ce72626c519] <==
	2025/12/05 07:06:04 Using namespace: kubernetes-dashboard
	2025/12/05 07:06:04 Using in-cluster config to connect to apiserver
	2025/12/05 07:06:04 Using secret token for csrf signing
	2025/12/05 07:06:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/05 07:06:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/05 07:06:04 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/05 07:06:04 Generating JWE encryption key
	2025/12/05 07:06:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/05 07:06:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/05 07:06:05 Initializing JWE encryption key from synchronized object
	2025/12/05 07:06:05 Creating in-cluster Sidecar client
	2025/12/05 07:06:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:06:05 Serving insecurely on HTTP port: 9090
	2025/12/05 07:06:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:06:04 Starting overwatch
	
	
	==> storage-provisioner [2073d619fdee4927ee6cab8da5025189478e4d40ae7780f71aca88691a55b2b6] <==
	I1205 07:05:58.403691       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 07:06:28.406738       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8af45e76145b51d65ed14c70da6520dfd018963f659d331d682adfa4562184a6] <==
	I1205 07:06:29.179998       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:06:29.187381       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:06:29.187430       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1205 07:06:29.189221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:32.644087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:36.905438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:40.504477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:43.558639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:46.581727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:46.587984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:06:46.588221       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:06:46.588824       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60a68084-c5d5-49bc-8273-b0880be31ea1", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-008839_62555777-06e5-4b9c-9f53-9eb4e8d0fe24 became leader
	I1205 07:06:46.588900       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-008839_62555777-06e5-4b9c-9f53-9eb4e8d0fe24!
	W1205 07:06:46.592311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:46.598837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:06:46.689525       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-008839_62555777-06e5-4b9c-9f53-9eb4e8d0fe24!
	W1205 07:06:48.603826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:48.608436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:50.611958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:50.630044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008839 -n no-preload-008839
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008839 -n no-preload-008839: exit status 2 (328.987499ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-008839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-624263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-624263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (252.971565ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-624263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-624263
helpers_test.go:243: (dbg) docker inspect newest-cni-624263:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384",
	        "Created": "2025-12-05T07:06:27.282785748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376320,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:06:27.323921445Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/hostname",
	        "HostsPath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/hosts",
	        "LogPath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384-json.log",
	        "Name": "/newest-cni-624263",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-624263:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-624263",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384",
	                "LowerDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-624263",
	                "Source": "/var/lib/docker/volumes/newest-cni-624263/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-624263",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-624263",
	                "name.minikube.sigs.k8s.io": "newest-cni-624263",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f19e5b85e85a80f2658de219689930c88c87be16b451ef0377704cb500c9a15a",
	            "SandboxKey": "/var/run/docker/netns/f19e5b85e85a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-624263": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "94030ad1138d9a442ae2471b64631306bc41b223df756631ceb53e7e7a11b469",
	                    "EndpointID": "cff497d8594b4cbbeb6c5f0cd42522e8f1a8ab772272377e5f6970bc1e3e4ba3",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "66:74:f8:e4:6b:be",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-624263",
	                        "4f54f5052bf2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-624263 -n newest-cni-624263
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-624263 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-874709 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-874709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p no-preload-008839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p no-preload-008839 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-172186 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p no-preload-008839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-770390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ image   │ no-preload-008839 image list --format=json                                                                                                                                                                                                           │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p no-preload-008839 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-624263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:06:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:06:26.588234  375543 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:06:26.588509  375543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:26.588519  375543 out.go:374] Setting ErrFile to fd 2...
	I1205 07:06:26.588525  375543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:26.588695  375543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:06:26.589115  375543 out.go:368] Setting JSON to false
	I1205 07:06:26.590262  375543 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6531,"bootTime":1764911856,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:06:26.590314  375543 start.go:143] virtualization: kvm guest
	I1205 07:06:26.592067  375543 out.go:179] * [embed-certs-770390] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:06:26.593635  375543 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:06:26.593659  375543 notify.go:221] Checking for updates...
	I1205 07:06:26.595966  375543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:06:26.597221  375543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:26.598431  375543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:06:26.599882  375543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:06:26.601166  375543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:06:26.384025  375309 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:06:26.384217  375309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:06:26.408220  375309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:06:26.408239  375309 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:06:26.412289  375309 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1205 07:06:26.618671  375309 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 07:06:26.618857  375309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:06:26.618897  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json: {Name:mk1a3d1498588cc35fd8c475060bbc66ec8b6678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:26.618949  375309 cache.go:107] acquiring lock: {Name:mk98363952ca1815516604fb7dbfef9be11a7d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.618987  375309 cache.go:107] acquiring lock: {Name:mk167c9428ef1965e0e29561c9593491905126f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.618994  375309 cache.go:107] acquiring lock: {Name:mk205a6d5dedd135c0c99429d72b9328d8d5dc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619036  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:06:26.619036  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:06:26.619047  375309 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 62.095µs
	I1205 07:06:26.618958  375309 cache.go:107] acquiring lock: {Name:mkf79bca1dcd2e8402871ccbd85f08189f26d5a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619060  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:06:26.619047  375309 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.433µs
	I1205 07:06:26.619070  375309 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:06:26.618954  375309 cache.go:107] acquiring lock: {Name:mk4eccc9886628e868c0adec616b704f1c193356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619075  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:06:26.619080  375309 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 78.568µs
	I1205 07:06:26.619083  375309 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 131.383µs
	I1205 07:06:26.619092  375309 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:06:26.619073  375309 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:06:26.619100  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:06:26.619101  375309 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619062  375309 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619093  375309 cache.go:107] acquiring lock: {Name:mk55ddd5ec022e6049bc6d750efbad0639669233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619107  375309 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 163.978µs
	I1205 07:06:26.619116  375309 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619122  375309 start.go:360] acquireMachinesLock for newest-cni-624263: {Name:mka35bbd7b5824f70f8017fd9b3a0ee56ab72931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619139  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:06:26.619147  375309 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 56.825µs
	I1205 07:06:26.619164  375309 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:06:26.619187  375309 start.go:364] duration metric: took 54.102µs to acquireMachinesLock for "newest-cni-624263"
	I1205 07:06:26.619178  375309 cache.go:107] acquiring lock: {Name:mk7e52439bbd1c3c582b2dbb20db8467b0caa4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619209  375309 start.go:93] Provisioning new machine with config: &{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:06:26.619295  375309 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:06:26.619290  375309 cache.go:107] acquiring lock: {Name:mk64ac073eb60c52be1998c1349c3f317eb7eb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.619407  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:06:26.619430  375309 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 331.673µs
	I1205 07:06:26.619447  375309 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:06:26.619268  375309 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:06:26.619462  375309 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 475.67µs
	I1205 07:06:26.619474  375309 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:06:26.619482  375309 cache.go:87] Successfully saved all images to host disk.
	I1205 07:06:26.602620  375543 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:26.603160  375543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:06:26.627216  375543 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:06:26.627376  375543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:26.688879  375543 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-12-05 07:06:26.678958971 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:26.689006  375543 docker.go:319] overlay module found
	I1205 07:06:26.690710  375543 out.go:179] * Using the docker driver based on existing profile
	I1205 07:06:26.691897  375543 start.go:309] selected driver: docker
	I1205 07:06:26.691911  375543 start.go:927] validating driver "docker" against &{Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:26.692006  375543 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:06:26.692563  375543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:06:26.753344  375543 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-12-05 07:06:26.743404439 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:06:26.753715  375543 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:26.753753  375543 cni.go:84] Creating CNI manager for ""
	I1205 07:06:26.753817  375543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:26.753868  375543 start.go:353] cluster config:
	{Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:26.755544  375543 out.go:179] * Starting "embed-certs-770390" primary control-plane node in "embed-certs-770390" cluster
	I1205 07:06:26.756738  375543 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:06:26.757980  375543 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:06:26.759082  375543 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:26.759119  375543 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 07:06:26.759135  375543 cache.go:65] Caching tarball of preloaded images
	I1205 07:06:26.759194  375543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:06:26.759237  375543 preload.go:238] Found /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 07:06:26.759253  375543 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:06:26.759384  375543 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/config.json ...
	I1205 07:06:26.780168  375543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:06:26.780185  375543 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:06:26.780201  375543 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:06:26.780233  375543 start.go:360] acquireMachinesLock for embed-certs-770390: {Name:mk0b160cfba8a84d98b6566219365b8df24bf5b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:06:26.780296  375543 start.go:364] duration metric: took 44.736µs to acquireMachinesLock for "embed-certs-770390"
	I1205 07:06:26.780318  375543 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:06:26.780342  375543 fix.go:54] fixHost starting: 
	I1205 07:06:26.780580  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:26.799942  375543 fix.go:112] recreateIfNeeded on embed-certs-770390: state=Stopped err=<nil>
	W1205 07:06:26.799979  375543 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:06:23.903235  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:25.904229  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:27.904712  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:26.624904  375309 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:06:26.625236  375309 start.go:159] libmachine.API.Create for "newest-cni-624263" (driver="docker")
	I1205 07:06:26.625293  375309 client.go:173] LocalClient.Create starting
	I1205 07:06:26.625440  375309 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem
	I1205 07:06:26.625497  375309 main.go:143] libmachine: Decoding PEM data...
	I1205 07:06:26.625526  375309 main.go:143] libmachine: Parsing certificate...
	I1205 07:06:26.625585  375309 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem
	I1205 07:06:26.625618  375309 main.go:143] libmachine: Decoding PEM data...
	I1205 07:06:26.625632  375309 main.go:143] libmachine: Parsing certificate...
	I1205 07:06:26.626063  375309 cli_runner.go:164] Run: docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:06:26.645528  375309 cli_runner.go:211] docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:06:26.645637  375309 network_create.go:284] running [docker network inspect newest-cni-624263] to gather additional debugging logs...
	I1205 07:06:26.645660  375309 cli_runner.go:164] Run: docker network inspect newest-cni-624263
	W1205 07:06:26.666476  375309 cli_runner.go:211] docker network inspect newest-cni-624263 returned with exit code 1
	I1205 07:06:26.666508  375309 network_create.go:287] error running [docker network inspect newest-cni-624263]: docker network inspect newest-cni-624263: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-624263 not found
	I1205 07:06:26.666525  375309 network_create.go:289] output of [docker network inspect newest-cni-624263]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-624263 not found
	
	** /stderr **
	I1205 07:06:26.666651  375309 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:26.685626  375309 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d57cb024a629 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:ab:20:17:d9:1a} reservation:<nil>}
	I1205 07:06:26.686333  375309 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-29ce45f1f3fd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:f2:e1:5a:fb:fd} reservation:<nil>}
	I1205 07:06:26.687062  375309 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-18be16a82b81 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:25:6c:b3:f6:c6} reservation:<nil>}
	I1205 07:06:26.687648  375309 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-931902d22986 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:1a:d5:72:cd:51} reservation:<nil>}
	I1205 07:06:26.688156  375309 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b424bb5358c0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e6:4c:79:ba:46:30} reservation:<nil>}
	I1205 07:06:26.688952  375309 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7252f408ef75 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ce:04:ba:35:24:10} reservation:<nil>}
	I1205 07:06:26.689983  375309 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020b7df0}
	I1205 07:06:26.690008  375309 network_create.go:124] attempt to create docker network newest-cni-624263 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1205 07:06:26.690065  375309 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-624263 newest-cni-624263
	I1205 07:06:26.743102  375309 network_create.go:108] docker network newest-cni-624263 192.168.103.0/24 created
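The subnet probing above walks candidate 192.168.x.0/24 blocks until it finds one that no existing bridge network claims (49, 58, 67, 76, 85 and 94 are taken in this run, so 103 is chosen). A minimal Go sketch of that selection, assuming the fixed step between candidates seen in this log rather than minikube's actual algorithm:

    // Illustrative only; not minikube's implementation. Picks the first
    // 192.168.x.0/24 subnet not already claimed, stepping as in the log.
    package main

    import "fmt"

    func freeSubnet(taken map[string]bool) (string, bool) {
    	for octet := 49; octet <= 247; octet += 9 { // step size is an assumption
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[cidr] {
    			return cidr, true
    		}
    	}
    	return "", false
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    		"192.168.85.0/24": true, "192.168.94.0/24": true,
    	}
    	if cidr, ok := freeSubnet(taken); ok {
    		fmt.Println("using free private subnet", cidr) // prints 192.168.103.0/24
    	}
    }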
	I1205 07:06:26.743126  375309 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-624263" container
	I1205 07:06:26.743192  375309 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:06:26.762523  375309 cli_runner.go:164] Run: docker volume create newest-cni-624263 --label name.minikube.sigs.k8s.io=newest-cni-624263 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:06:26.780448  375309 oci.go:103] Successfully created a docker volume newest-cni-624263
	I1205 07:06:26.780537  375309 cli_runner.go:164] Run: docker run --rm --name newest-cni-624263-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-624263 --entrypoint /usr/bin/test -v newest-cni-624263:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:06:27.200143  375309 oci.go:107] Successfully prepared a docker volume newest-cni-624263
	I1205 07:06:27.200209  375309 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1205 07:06:27.200286  375309 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1205 07:06:27.200310  375309 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1205 07:06:27.200392  375309 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:06:27.265015  375309 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-624263 --name newest-cni-624263 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-624263 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-624263 --network newest-cni-624263 --ip 192.168.103.2 --volume newest-cni-624263:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:06:27.552297  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Running}}
	I1205 07:06:27.573173  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:27.593054  375309 cli_runner.go:164] Run: docker exec newest-cni-624263 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:06:27.634139  375309 oci.go:144] the created container "newest-cni-624263" has a running status.
	I1205 07:06:27.634169  375309 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa...
	I1205 07:06:27.810850  375309 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:06:27.838307  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:27.864433  375309 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:06:27.864459  375309 kic_runner.go:114] Args: [docker exec --privileged newest-cni-624263 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:06:27.914874  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:27.937979  375309 machine.go:94] provisionDockerMachine start ...
	I1205 07:06:27.938080  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:27.957892  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:27.958181  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:27.958199  375309 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:06:28.099298  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:06:28.099339  375309 ubuntu.go:182] provisioning hostname "newest-cni-624263"
	I1205 07:06:28.099404  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.118216  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:28.118434  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:28.118447  375309 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-624263 && echo "newest-cni-624263" | sudo tee /etc/hostname
	I1205 07:06:28.266352  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:06:28.266427  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.285381  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:28.285625  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:28.285656  375309 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-624263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-624263/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-624263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:06:28.421424  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:06:28.421450  375309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:06:28.421501  375309 ubuntu.go:190] setting up certificates
	I1205 07:06:28.421519  375309 provision.go:84] configureAuth start
	I1205 07:06:28.421570  375309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:06:28.439867  375309 provision.go:143] copyHostCerts
	I1205 07:06:28.439922  375309 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:06:28.439932  375309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:06:28.439988  375309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:06:28.440064  375309 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:06:28.440072  375309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:06:28.440097  375309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:06:28.440150  375309 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:06:28.440157  375309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:06:28.440178  375309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:06:28.440226  375309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.newest-cni-624263 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-624263]
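The server certificate generated here carries the SANs listed in the log (127.0.0.1, the static container IP 192.168.103.2, localhost, minikube and the profile name). For reference, a generic crypto/x509 sketch of issuing such a certificate from a CA key pair; this is not minikube's code, and the self-generated CA here stands in for the ca.pem / ca-key.pem files the log refers to:

    // Generic sketch (not minikube's implementation) of signing a server cert
    // with the SANs shown above. Errors are ignored for brevity.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Placeholder CA; a real run would load the existing ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-624263"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-624263"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }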
	I1205 07:06:28.490526  375309 provision.go:177] copyRemoteCerts
	I1205 07:06:28.490572  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:06:28.490604  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.508254  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:28.607548  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:06:28.626034  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:06:28.643274  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:06:28.660190  375309 provision.go:87] duration metric: took 238.65746ms to configureAuth
	I1205 07:06:28.660213  375309 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:06:28.660451  375309 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:06:28.660552  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.678203  375309 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:28.678454  375309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:06:28.678473  375309 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:06:28.964368  375309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:06:28.964391  375309 machine.go:97] duration metric: took 1.026387988s to provisionDockerMachine
	I1205 07:06:28.964401  375309 client.go:176] duration metric: took 2.339097815s to LocalClient.Create
	I1205 07:06:28.964417  375309 start.go:167] duration metric: took 2.339183991s to libmachine.API.Create "newest-cni-624263"
	I1205 07:06:28.964424  375309 start.go:293] postStartSetup for "newest-cni-624263" (driver="docker")
	I1205 07:06:28.964437  375309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:06:28.964496  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:06:28.964532  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:28.983132  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.083395  375309 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:06:29.086772  375309 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:06:29.086801  375309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:06:29.086821  375309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:06:29.086871  375309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:06:29.086968  375309 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:06:29.087082  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:06:29.094830  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:29.113691  375309 start.go:296] duration metric: took 149.256692ms for postStartSetup
	I1205 07:06:29.114008  375309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:06:29.132535  375309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:06:29.132800  375309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:06:29.132848  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:29.154540  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.253994  375309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:06:29.258256  375309 start.go:128] duration metric: took 2.638946756s to createHost
	I1205 07:06:29.258278  375309 start.go:83] releasing machines lock for "newest-cni-624263", held for 2.6390804s
	I1205 07:06:29.258357  375309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:06:29.275163  375309 ssh_runner.go:195] Run: cat /version.json
	I1205 07:06:29.275199  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:29.275243  375309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:06:29.275301  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:29.292525  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.293433  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:29.439694  375309 ssh_runner.go:195] Run: systemctl --version
	I1205 07:06:29.445781  375309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:06:29.478433  375309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:06:29.482835  375309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:06:29.482896  375309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:06:29.507064  375309 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 07:06:29.507086  375309 start.go:496] detecting cgroup driver to use...
	I1205 07:06:29.507115  375309 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:06:29.507154  375309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:06:29.523263  375309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:06:29.534962  375309 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:06:29.535000  375309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:06:29.549931  375309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:06:29.566793  375309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:06:29.650059  375309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:06:29.736486  375309 docker.go:234] disabling docker service ...
	I1205 07:06:29.736547  375309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:06:29.754991  375309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:06:29.766663  375309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:06:29.846539  375309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:06:29.924690  375309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:06:29.936548  375309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:06:29.950065  375309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:06:29.950123  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.959781  375309 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:06:29.959833  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.967908  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.975938  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.983900  375309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:06:29.991260  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:29.999272  375309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:30.012680  375309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:30.021140  375309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:06:30.028051  375309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:06:30.034722  375309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:30.112871  375309 ssh_runner.go:195] Run: sudo systemctl restart crio
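Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below before crio is restarted. This fragment is reconstructed from the commands in the log (section placement follows CRI-O's standard TOML layout), not captured from the node:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]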
	I1205 07:06:30.237839  375309 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:06:30.237906  375309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:06:30.241691  375309 start.go:564] Will wait 60s for crictl version
	I1205 07:06:30.241747  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.244968  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:06:30.267110  375309 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:06:30.267179  375309 ssh_runner.go:195] Run: crio --version
	I1205 07:06:30.294236  375309 ssh_runner.go:195] Run: crio --version
	I1205 07:06:30.323746  375309 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 07:06:30.324950  375309 cli_runner.go:164] Run: docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:30.341782  375309 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1205 07:06:30.345513  375309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:30.356609  375309 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1205 07:06:28.056673  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:30.560609  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:30.357703  375309 kubeadm.go:884] updating cluster {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:06:30.357837  375309 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:06:30.357886  375309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:30.381946  375309 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:06:30.381975  375309 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:06:30.382034  375309 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.382056  375309 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:06:30.382071  375309 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.382087  375309 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.382058  375309 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.382035  375309 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.382041  375309 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.382074  375309 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.383617  375309 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.383669  375309 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.383686  375309 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.383611  375309 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.383775  375309 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.383990  375309 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:06:30.384965  375309 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.385843  375309 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.534923  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.535969  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.541762  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.547313  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.558484  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.574838  375309 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1205 07:06:30.574883  375309 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.575084  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.578994  375309 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1205 07:06:30.579036  375309 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.579087  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.587216  375309 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1205 07:06:30.587248  375309 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.587287  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.601815  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.637213  375309 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1205 07:06:30.637252  375309 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.637293  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.637309  375309 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1205 07:06:30.637355  375309 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.637389  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.637394  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.637440  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.637462  375309 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1205 07:06:30.637481  375309 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.637445  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.637510  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.668185  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.668206  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.668216  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.668196  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.668257  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.668292  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.705400  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.705445  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:06:30.705403  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:06:30.705531  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.706185  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.706239  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:06:30.739595  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:06:30.739704  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:06:30.741607  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:30.741700  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:30.741619  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:06:30.741797  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:06:30.744944  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:06:30.744985  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:30.745064  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:30.746956  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:06:30.746987  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1205 07:06:30.794130  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:06:30.794147  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:30.794128  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.794178  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.794187  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1205 07:06:30.794196  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1205 07:06:30.794229  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:06:30.794234  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:30.794261  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:06:30.794338  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:06:30.836933  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:06:30.836964  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1205 07:06:30.838245  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.838272  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1205 07:06:30.838338  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:06:30.838364  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1205 07:06:30.857777  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.952672  375309 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 07:06:30.952727  375309 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:30.952794  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:30.991362  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:31.049944  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:31.105055  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:31.161810  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:31.161973  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:06:31.166067  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:06:31.166166  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
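The pattern repeated above for each cached image is: stat the tarball under /var/lib/minikube/images on the node, scp it from the local cache if the stat fails, then load it into the CRI-O image store with podman. A minimal local sketch of the final step, assuming the tarball path from the log and that podman is installed (minikube itself drives these commands over SSH via ssh_runner):

    // Sketch of the "stat -> scp -> podman load" sequence seen above, run
    // locally instead of over SSH. Illustrative only.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func loadCachedImage(path string) error {
    	if _, err := os.Stat(path); err != nil {
    		return fmt.Errorf("tarball not on disk, it would be copied from the cache first: %w", err)
    	}
    	cmd := exec.Command("sudo", "podman", "load", "-i", path)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// path taken from the log above
    	if err := loadCachedImage("/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }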
	I1205 07:06:26.801554  375543 out.go:252] * Restarting existing docker container for "embed-certs-770390" ...
	I1205 07:06:26.801629  375543 cli_runner.go:164] Run: docker start embed-certs-770390
	I1205 07:06:27.074915  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:27.097444  375543 kic.go:430] container "embed-certs-770390" state is running.
	I1205 07:06:27.097863  375543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:06:27.118527  375543 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/config.json ...
	I1205 07:06:27.118771  375543 machine.go:94] provisionDockerMachine start ...
	I1205 07:06:27.118869  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:27.140642  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:27.140903  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:27.140920  375543 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:06:27.141707  375543 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53866->127.0.0.1:33128: read: connection reset by peer
	I1205 07:06:30.285862  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-770390
	
	I1205 07:06:30.285883  375543 ubuntu.go:182] provisioning hostname "embed-certs-770390"
	I1205 07:06:30.285963  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:30.306084  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:30.306389  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:30.306406  375543 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-770390 && echo "embed-certs-770390" | sudo tee /etc/hostname
	I1205 07:06:30.457639  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-770390
	
	I1205 07:06:30.457716  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:30.475904  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:30.476118  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:30.476140  375543 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-770390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-770390/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-770390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:06:30.618737  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:06:30.618762  375543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:06:30.618787  375543 ubuntu.go:190] setting up certificates
	I1205 07:06:30.618798  375543 provision.go:84] configureAuth start
	I1205 07:06:30.618872  375543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:06:30.637076  375543 provision.go:143] copyHostCerts
	I1205 07:06:30.637138  375543 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:06:30.637151  375543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:06:30.637230  375543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:06:30.637377  375543 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:06:30.637400  375543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:06:30.637449  375543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:06:30.637555  375543 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:06:30.637567  375543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:06:30.637606  375543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:06:30.637698  375543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.embed-certs-770390 san=[127.0.0.1 192.168.76.2 embed-certs-770390 localhost minikube]
	I1205 07:06:30.850789  375543 provision.go:177] copyRemoteCerts
	I1205 07:06:30.850846  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:06:30.850878  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:30.870854  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:30.979857  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:06:31.002122  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:06:31.026307  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:06:31.050483  375543 provision.go:87] duration metric: took 431.665526ms to configureAuth
	I1205 07:06:31.050515  375543 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:06:31.050734  375543 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:31.050879  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:31.077241  375543 main.go:143] libmachine: Using SSH client type: native
	I1205 07:06:31.077607  375543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1205 07:06:31.077644  375543 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1205 07:06:30.403214  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	W1205 07:06:32.403773  366710 pod_ready.go:104] pod "coredns-7d764666f9-bvbhf" is not "Ready", error: <nil>
	I1205 07:06:32.903916  366710 pod_ready.go:94] pod "coredns-7d764666f9-bvbhf" is "Ready"
	I1205 07:06:32.903942  366710 pod_ready.go:86] duration metric: took 34.00575162s for pod "coredns-7d764666f9-bvbhf" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.906601  366710 pod_ready.go:83] waiting for pod "etcd-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.913301  366710 pod_ready.go:94] pod "etcd-no-preload-008839" is "Ready"
	I1205 07:06:32.913400  366710 pod_ready.go:86] duration metric: took 6.777304ms for pod "etcd-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.915636  366710 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.919542  366710 pod_ready.go:94] pod "kube-apiserver-no-preload-008839" is "Ready"
	I1205 07:06:32.919566  366710 pod_ready.go:86] duration metric: took 3.909248ms for pod "kube-apiserver-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:32.921563  366710 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:33.101533  366710 pod_ready.go:94] pod "kube-controller-manager-no-preload-008839" is "Ready"
	I1205 07:06:33.101569  366710 pod_ready.go:86] duration metric: took 179.984485ms for pod "kube-controller-manager-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:33.301800  366710 pod_ready.go:83] waiting for pod "kube-proxy-s9zn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:33.702088  366710 pod_ready.go:94] pod "kube-proxy-s9zn2" is "Ready"
	I1205 07:06:33.702116  366710 pod_ready.go:86] duration metric: took 400.29234ms for pod "kube-proxy-s9zn2" in "kube-system" namespace to be "Ready" or be gone ...
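The pod_ready waits above poll each control-plane pod until its Ready condition is true or the pod is gone, and record the elapsed time. A rough kubectl-based approximation of that loop (minikube's helper uses the Kubernetes client API, so this is only a sketch; the pod name is taken from the log):

    // Rough approximation of the pod_ready wait loop using kubectl.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitPodReady(namespace, pod string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "get", "pod", pod,
    			"-n", namespace, "-o", "jsonpath="+jsonpath).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready after %s", namespace, pod, timeout)
    }

    func main() {
    	if err := waitPodReady("kube-system", "kube-proxy-s9zn2", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }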
	I1205 07:06:31.721865  375543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:06:31.721894  375543 machine.go:97] duration metric: took 4.603106939s to provisionDockerMachine
	I1205 07:06:31.721911  375543 start.go:293] postStartSetup for "embed-certs-770390" (driver="docker")
	I1205 07:06:31.721926  375543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:06:31.721985  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:06:31.722034  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:31.745060  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:31.850959  375543 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:06:31.854831  375543 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:06:31.854862  375543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:06:31.854875  375543 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:06:31.854930  375543 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:06:31.855030  375543 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:06:31.855158  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:06:31.863927  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:31.883380  375543 start.go:296] duration metric: took 161.454914ms for postStartSetup
	I1205 07:06:31.883456  375543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:06:31.883520  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:31.906830  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:32.008279  375543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:06:32.013614  375543 fix.go:56] duration metric: took 5.233266702s for fixHost
	I1205 07:06:32.013639  375543 start.go:83] releasing machines lock for "embed-certs-770390", held for 5.233329197s
	I1205 07:06:32.013713  375543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-770390
	I1205 07:06:32.035130  375543 ssh_runner.go:195] Run: cat /version.json
	I1205 07:06:32.035191  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:32.035218  375543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:06:32.035305  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:32.059370  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:32.060657  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:32.825514  375543 ssh_runner.go:195] Run: systemctl --version
	I1205 07:06:32.832229  375543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:06:32.867423  375543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:06:32.872157  375543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:06:32.872230  375543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:06:32.880841  375543 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:06:32.880864  375543 start.go:496] detecting cgroup driver to use...
	I1205 07:06:32.880892  375543 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:06:32.880945  375543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:06:32.897262  375543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:06:32.913628  375543 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:06:32.913679  375543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:06:32.931183  375543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:06:32.943212  375543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:06:33.031242  375543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:06:33.124377  375543 docker.go:234] disabling docker service ...
	I1205 07:06:33.124432  375543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:06:33.138291  375543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:06:33.150719  375543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:06:33.243720  375543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:06:33.334574  375543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:06:33.346746  375543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:06:33.360678  375543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:06:33.360741  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.369727  375543 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:06:33.369786  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.378916  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.387258  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.395950  375543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:06:33.405206  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.415134  375543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.425222  375543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:06:33.434369  375543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:06:33.442019  375543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:06:33.449717  375543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:33.543423  375543 ssh_runner.go:195] Run: sudo systemctl restart crio
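
	The sed edits above all target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. A rough way to confirm the result after the restart (a sketch, not part of the captured log; the exact file contents were not logged):

	# expected to show roughly: pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "systemd", conmon_cgroup = "pod", and a default_sysctls
	# entry for net.ipv4.ip_unprivileged_port_start=0
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
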
	I1205 07:06:33.975505  375543 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:06:33.975586  375543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:06:33.979949  375543 start.go:564] Will wait 60s for crictl version
	I1205 07:06:33.980033  375543 ssh_runner.go:195] Run: which crictl
	I1205 07:06:33.984307  375543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:06:34.008163  375543 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:06:34.008225  375543 ssh_runner.go:195] Run: crio --version
	I1205 07:06:34.036756  375543 ssh_runner.go:195] Run: crio --version
	I1205 07:06:34.070974  375543 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 07:06:33.902396  366710 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:34.301736  366710 pod_ready.go:94] pod "kube-scheduler-no-preload-008839" is "Ready"
	I1205 07:06:34.301762  366710 pod_ready.go:86] duration metric: took 399.341028ms for pod "kube-scheduler-no-preload-008839" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:34.301777  366710 pod_ready.go:40] duration metric: took 35.406378156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:34.356972  366710 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1205 07:06:34.358967  366710 out.go:179] * Done! kubectl is now configured to use "no-preload-008839" cluster and "default" namespace by default
	I1205 07:06:34.071865  375543 cli_runner.go:164] Run: docker network inspect embed-certs-770390 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:06:34.089273  375543 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:06:34.093527  375543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:34.104382  375543 kubeadm.go:884] updating cluster {Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:06:34.104493  375543 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:06:34.104533  375543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:34.135986  375543 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:06:34.136005  375543 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:06:34.136046  375543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:06:34.163958  375543 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:06:34.163976  375543 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:06:34.163982  375543 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1205 07:06:34.164096  375543 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-770390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
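
	The kubelet unit override shown above is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below). A quick sanity check of the effective ExecStart on the node (a sketch, not part of the captured log):

	# systemctl cat prints the unit plus every drop-in, so the minikube-supplied
	# ExecStart with --hostname-override and --node-ip should appear here
	sudo systemctl cat kubelet
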
	I1205 07:06:34.164159  375543 ssh_runner.go:195] Run: crio config
	I1205 07:06:34.210786  375543 cni.go:84] Creating CNI manager for ""
	I1205 07:06:34.210808  375543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:34.210819  375543 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:06:34.210839  375543 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-770390 NodeName:embed-certs-770390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:06:34.210959  375543 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-770390"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:06:34.211023  375543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:06:34.219056  375543 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:06:34.219118  375543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:06:34.227080  375543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1205 07:06:34.239752  375543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:06:34.251999  375543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
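
	The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (minikube later diffs it against the existing kubeadm.yaml). A sketch of how to inspect it from the host, assuming the profile name from this run:

	minikube -p embed-certs-770390 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
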
	I1205 07:06:34.263865  375543 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:06:34.267417  375543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
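
	The grep/sed pair above appends the control-plane.minikube.internal entry to the node's /etc/hosts (the same pattern was used earlier for host.minikube.internal). A sketch of how to confirm the entry resolves, not part of the captured log:

	# run inside the node; both should report 192.168.76.2 for this profile
	grep control-plane.minikube.internal /etc/hosts
	getent hosts control-plane.minikube.internal
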
	I1205 07:06:34.277134  375543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:34.394783  375543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:34.419292  375543 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390 for IP: 192.168.76.2
	I1205 07:06:34.419313  375543 certs.go:195] generating shared ca certs ...
	I1205 07:06:34.419352  375543 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:34.419526  375543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:06:34.419586  375543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:06:34.419598  375543 certs.go:257] generating profile certs ...
	I1205 07:06:34.419694  375543 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/client.key
	I1205 07:06:34.419767  375543 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key.46ffd30e
	I1205 07:06:34.419858  375543 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.key
	I1205 07:06:34.420010  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:06:34.420057  375543 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:06:34.420071  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:06:34.420110  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:06:34.420143  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:06:34.420172  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:06:34.420226  375543 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:34.421032  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:06:34.440844  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:06:34.465635  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:06:34.487656  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:06:34.511641  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 07:06:34.535311  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:06:34.552834  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:06:34.570691  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/embed-certs-770390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:06:34.588483  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:06:34.605748  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:06:34.624519  375543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:06:34.644092  375543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:06:34.657592  375543 ssh_runner.go:195] Run: openssl version
	I1205 07:06:34.663869  375543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.673595  375543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:06:34.683140  375543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.688216  375543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.688277  375543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:34.738387  375543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:06:34.748071  375543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.757769  375543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:06:34.767020  375543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.770922  375543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.770972  375543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:06:34.813377  375543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:06:34.823642  375543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.833453  375543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:06:34.841565  375543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.846018  375543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.846067  375543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:06:34.881430  375543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:06:34.888928  375543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:06:34.892723  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:06:34.932540  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:06:34.979914  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:06:35.029643  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:06:35.084612  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:06:35.132242  375543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
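
	Each of the openssl runs above uses -checkend 86400, i.e. "will this certificate still be valid 24 hours from now". The same check can be reproduced by hand against any certificate under /var/lib/minikube/certs (a sketch, not part of the captured log):

	# exit status 0 means the cert is valid for at least another 86400 seconds
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "valid for >= 24h" || echo "expires within 24h"
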
	I1205 07:06:35.171706  375543 kubeadm.go:401] StartCluster: {Name:embed-certs-770390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-770390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:35.171804  375543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:06:35.171880  375543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:06:35.202472  375543 cri.go:89] found id: "2e99e708af8cdf7e8644b2c854970fe3b2f9131df99f8ff6c3a19b08659e1df2"
	I1205 07:06:35.202495  375543 cri.go:89] found id: "4d4e5c825a7de3068675039cb3151e44142096587a1c8f2d75ad7ecbd5429caa"
	I1205 07:06:35.202501  375543 cri.go:89] found id: "923febfdc8bccb1ad8239b49c08d7497c407d21accd38106c20a1aba8cecaffb"
	I1205 07:06:35.202506  375543 cri.go:89] found id: "ae1745cf83f11e7391209efe832ac4ca4aab557828ba3aab75cf48e7fe75b73f"
	I1205 07:06:35.202514  375543 cri.go:89] found id: ""
	I1205 07:06:35.202559  375543 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:06:35.214717  375543 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:06:35Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:06:35.214778  375543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:06:35.223159  375543 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:06:35.223177  375543 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:06:35.223230  375543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:06:35.231356  375543 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:06:35.232131  375543 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-770390" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:35.232612  375543 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-770390" cluster setting kubeconfig missing "embed-certs-770390" context setting]
	I1205 07:06:35.233423  375543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:35.235317  375543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:06:35.242634  375543 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1205 07:06:35.242665  375543 kubeadm.go:602] duration metric: took 19.477371ms to restartPrimaryControlPlane
	I1205 07:06:35.242675  375543 kubeadm.go:403] duration metric: took 70.981616ms to StartCluster
	I1205 07:06:35.242690  375543 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:35.242761  375543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:35.244041  375543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:35.244259  375543 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:06:35.244338  375543 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:06:35.244434  375543 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-770390"
	I1205 07:06:35.244450  375543 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-770390"
	W1205 07:06:35.244462  375543 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:06:35.244471  375543 addons.go:70] Setting dashboard=true in profile "embed-certs-770390"
	I1205 07:06:35.244496  375543 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:06:35.244500  375543 addons.go:239] Setting addon dashboard=true in "embed-certs-770390"
	W1205 07:06:35.244519  375543 addons.go:248] addon dashboard should already be in state true
	I1205 07:06:35.244510  375543 addons.go:70] Setting default-storageclass=true in profile "embed-certs-770390"
	I1205 07:06:35.244540  375543 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-770390"
	I1205 07:06:35.244551  375543 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:06:35.244494  375543 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:06:35.244825  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.244991  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.245043  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.247149  375543 out.go:179] * Verifying Kubernetes components...
	I1205 07:06:35.248386  375543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:35.272894  375543 addons.go:239] Setting addon default-storageclass=true in "embed-certs-770390"
	W1205 07:06:35.272915  375543 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:06:35.272939  375543 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:06:35.273400  375543 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:06:35.275193  375543 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:35.275251  375543 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:06:35.276704  375543 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:35.276758  375543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:06:35.276764  375543 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1205 07:06:33.056148  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:35.060453  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:31.366255  375309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1205 07:06:32.346995  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.184991035s)
	I1205 07:06:32.347021  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:06:32.347055  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:32.347104  375309 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1205 07:06:32.347120  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:06:32.347138  375309 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:06:32.347061  375309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.180871282s)
	I1205 07:06:32.347169  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:06:32.347188  375309 ssh_runner.go:195] Run: which crictl
	I1205 07:06:32.347192  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1205 07:06:33.570397  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.223258044s)
	I1205 07:06:33.570426  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:06:33.570455  375309 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:06:33.570499  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:06:33.570511  375309 ssh_runner.go:235] Completed: which crictl: (1.223307009s)
	I1205 07:06:33.570561  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:06:34.893160  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.322638807s)
	I1205 07:06:34.893187  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:06:34.893208  375309 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:06:34.893215  375309 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.322634396s)
	I1205 07:06:34.893245  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:06:34.893276  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:06:35.276808  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:35.277808  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:06:35.277826  375543 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:06:35.277888  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:35.301215  375543 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:35.301315  375543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:06:35.301418  375543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:06:35.308857  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:35.320257  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:35.332128  375543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:06:35.426032  375543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:35.431462  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:06:35.431489  375543 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:06:35.438950  375543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:35.447296  375543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-770390" to be "Ready" ...
	I1205 07:06:35.451227  375543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:35.451848  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:06:35.451886  375543 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:06:35.468647  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:06:35.468668  375543 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:06:35.498954  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:06:35.498976  375543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:06:35.545774  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:06:35.545808  375543 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:06:35.588544  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:06:35.588570  375543 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:06:35.610093  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:06:35.610117  375543 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:06:35.644554  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:06:35.644601  375543 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:06:35.667656  375543 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:06:35.667682  375543 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:06:35.688651  375543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:06:37.536634  375543 node_ready.go:49] node "embed-certs-770390" is "Ready"
	I1205 07:06:37.536671  375543 node_ready.go:38] duration metric: took 2.089351455s for node "embed-certs-770390" to be "Ready" ...
	I1205 07:06:37.536687  375543 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:06:37.536743  375543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:06:38.146255  375543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.707271235s)
	I1205 07:06:38.146314  375543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.695052574s)
	I1205 07:06:38.146429  375543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.457746781s)
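
	With the three kubectl apply runs above completed, the storage-provisioner, default StorageClass, and dashboard manifests are all in place. A sketch of how the dashboard objects could be checked afterwards (not part of the captured log; the dashboard addon normally lands in the kubernetes-dashboard namespace):

	kubectl --context embed-certs-770390 -n kubernetes-dashboard get deploy,svc,pods
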
	I1205 07:06:38.146472  375543 api_server.go:72] duration metric: took 2.902184723s to wait for apiserver process to appear ...
	I1205 07:06:38.146527  375543 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:06:38.146554  375543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 07:06:38.147993  375543 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-770390 addons enable metrics-server
	
	I1205 07:06:38.154740  375543 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:06:38.154761  375543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
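
	The 500 above comes from two poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) that typically stay failed for a short window while the apiserver finishes bootstrapping. The same verbose probe can be issued through kubectl once the kubeconfig for this profile exists (a sketch, not part of the captured log):

	kubectl --context embed-certs-770390 get --raw '/healthz?verbose'
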
	I1205 07:06:38.160172  375543 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1205 07:06:37.561481  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:40.055806  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:36.440601  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.547331042s)
	I1205 07:06:36.440633  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:06:36.440654  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:36.440666  375309 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.547364518s)
	I1205 07:06:36.440699  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:06:36.440737  375309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:06:38.061822  375309 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.621051807s)
	I1205 07:06:38.061871  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.621152631s)
	I1205 07:06:38.061900  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:06:38.061925  375309 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:06:38.061878  375309 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1205 07:06:38.061986  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:06:38.062043  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:06:38.066235  375309 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:06:38.066269  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1205 07:06:39.480656  375309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.418643669s)
	I1205 07:06:39.480686  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:06:39.480713  375309 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:06:39.480763  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:06:40.059650  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:06:40.059692  375309 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:06:40.059745  375309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1205 07:06:40.168218  375309 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:06:40.168260  375309 cache_images.go:125] Successfully loaded all cached images
	I1205 07:06:40.168267  375309 cache_images.go:94] duration metric: took 9.786277822s to LoadCachedImages
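
	At this point every cached image (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, storage-provisioner, pause) has been podman-loaded into CRI-O's store on the newest-cni node. A sketch of how to confirm the preload, mirroring the crictl check minikube itself runs elsewhere in this log (not part of the captured output):

	sudo crictl images | grep -E 'kube-|etcd|coredns|pause|storage-provisioner'
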
	I1205 07:06:40.168281  375309 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1205 07:06:40.168392  375309 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-624263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:06:40.168461  375309 ssh_runner.go:195] Run: crio config
	I1205 07:06:40.215126  375309 cni.go:84] Creating CNI manager for ""
	I1205 07:06:40.215148  375309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:40.215165  375309 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:06:40.215185  375309 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-624263 NodeName:newest-cni-624263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:06:40.215294  375309 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-624263"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:06:40.215371  375309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:06:40.223545  375309 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:06:40.223608  375309 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:06:40.231456  375309 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1205 07:06:40.231456  375309 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1205 07:06:40.231452  375309 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1205 07:06:40.231550  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:06:40.231600  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:06:40.231616  375309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:40.236450  375309 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:06:40.236478  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1205 07:06:40.236508  375309 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:06:40.236532  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1205 07:06:40.253269  375309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:06:40.289073  375309 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:06:40.289104  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1205 07:06:40.688980  375309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:06:40.696712  375309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1205 07:06:40.710980  375309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:06:40.726034  375309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 07:06:40.738766  375309 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:06:40.742492  375309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:06:40.752230  375309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:40.831660  375309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:40.858130  375309 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263 for IP: 192.168.103.2
	I1205 07:06:40.858175  375309 certs.go:195] generating shared ca certs ...
	I1205 07:06:40.858196  375309 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.858496  375309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:06:40.858561  375309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:06:40.858573  375309 certs.go:257] generating profile certs ...
	I1205 07:06:40.858645  375309 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key
	I1205 07:06:40.858659  375309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.crt with IP's: []
	I1205 07:06:40.893856  375309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.crt ...
	I1205 07:06:40.893898  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.crt: {Name:mk2b6195b99d5e275f660429f3814d5bdcd8191d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.894105  375309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key ...
	I1205 07:06:40.894140  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key: {Name:mke407b69941bd64dfca0f6ab1c80bb1c45b93ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.894275  375309 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada
	I1205 07:06:40.894306  375309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1205 07:06:40.941482  375309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada ...
	I1205 07:06:40.941507  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada: {Name:mk677ad869a55b9090eb744dc3feff29e8064497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.941661  375309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada ...
	I1205 07:06:40.941680  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada: {Name:mkb7c70fb23c29d27bdcbb21d4add4953a296250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:40.941769  375309 certs.go:382] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt.2a250ada -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt
	I1205 07:06:40.941862  375309 certs.go:386] copying /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada -> /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key
	I1205 07:06:40.941930  375309 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key
	I1205 07:06:40.941945  375309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt with IP's: []
	I1205 07:06:41.076769  375309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt ...
	I1205 07:06:41.076794  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt: {Name:mke1ae4d7cafe67dff134743b1bfeb82268bc450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:41.076927  375309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key ...
	I1205 07:06:41.076940  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key: {Name:mk11a3d7395501747e70db233d7500d344284191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:41.077110  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:06:41.077146  375309 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:06:41.077156  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:06:41.077191  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:06:41.077216  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:06:41.077245  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:06:41.077285  375309 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:06:41.077869  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:06:41.097495  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:06:41.114088  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:06:41.131277  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:06:41.148175  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:06:41.168203  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:06:41.190211  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:06:38.161254  375543 addons.go:530] duration metric: took 2.916934723s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1205 07:06:38.647484  375543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 07:06:38.654056  375543 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:06:38.654081  375543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
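The 500 responses above come from polling the apiserver's /healthz endpoint until it reports ok, which it does a few lines below once the rbac/bootstrap-roles post-start hook completes. A stand-alone Go sketch of such a poll loop (not minikube's actual client; the address is taken from this log and TLS verification is skipped for brevity):

    // sketch: poll an apiserver /healthz endpoint until it returns 200 ok
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // the test cluster uses a self-signed CA, so verification is skipped here for brevity
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.76.2:8443/healthz") // address taken from this log
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
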
	I1205 07:06:39.147586  375543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 07:06:39.152741  375543 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1205 07:06:39.153911  375543 api_server.go:141] control plane version: v1.34.2
	I1205 07:06:39.153938  375543 api_server.go:131] duration metric: took 1.007398463s to wait for apiserver health ...
	I1205 07:06:39.153949  375543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:06:39.158877  375543 system_pods.go:59] 8 kube-system pods found
	I1205 07:06:39.158918  375543 system_pods.go:61] "coredns-66bc5c9577-rg55r" [68bcad40-cb20-4ded-b15a-268ddb469470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:06:39.158931  375543 system_pods.go:61] "etcd-embed-certs-770390" [22f37425-6bf2-4bd1-ac8d-a7d8e1a66635] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:06:39.158944  375543 system_pods.go:61] "kindnet-dmpt2" [66c4a813-7f26-44e7-ab6f-be6422d710e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:06:39.158959  375543 system_pods.go:61] "kube-apiserver-embed-certs-770390" [77f4e205-d878-4cb2-9047-4e59db7afa54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:39.158971  375543 system_pods.go:61] "kube-controller-manager-embed-certs-770390" [ec537bde-1efe-493a-911e-43a74e613a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:39.158984  375543 system_pods.go:61] "kube-proxy-7bjnn" [6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:06:39.158989  375543 system_pods.go:61] "kube-scheduler-embed-certs-770390" [75177695-2b4c-4190-a054-eb007d9e3ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:39.158999  375543 system_pods.go:61] "storage-provisioner" [5c5ef936-ac84-44f0-8299-e431bcbbf8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:06:39.159007  375543 system_pods.go:74] duration metric: took 5.050804ms to wait for pod list to return data ...
	I1205 07:06:39.159021  375543 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:06:39.161392  375543 default_sa.go:45] found service account: "default"
	I1205 07:06:39.161413  375543 default_sa.go:55] duration metric: took 2.38628ms for default service account to be created ...
	I1205 07:06:39.161420  375543 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:06:39.163935  375543 system_pods.go:86] 8 kube-system pods found
	I1205 07:06:39.163966  375543 system_pods.go:89] "coredns-66bc5c9577-rg55r" [68bcad40-cb20-4ded-b15a-268ddb469470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:06:39.163978  375543 system_pods.go:89] "etcd-embed-certs-770390" [22f37425-6bf2-4bd1-ac8d-a7d8e1a66635] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:06:39.163992  375543 system_pods.go:89] "kindnet-dmpt2" [66c4a813-7f26-44e7-ab6f-be6422d710e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:06:39.164005  375543 system_pods.go:89] "kube-apiserver-embed-certs-770390" [77f4e205-d878-4cb2-9047-4e59db7afa54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:39.164016  375543 system_pods.go:89] "kube-controller-manager-embed-certs-770390" [ec537bde-1efe-493a-911e-43a74e613a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:39.164027  375543 system_pods.go:89] "kube-proxy-7bjnn" [6fa0fc44-e60d-4dd0-bcbe-cd17b7cafe44] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:06:39.164038  375543 system_pods.go:89] "kube-scheduler-embed-certs-770390" [75177695-2b4c-4190-a054-eb007d9e3ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:39.164055  375543 system_pods.go:89] "storage-provisioner" [5c5ef936-ac84-44f0-8299-e431bcbbf8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:06:39.164067  375543 system_pods.go:126] duration metric: took 2.64117ms to wait for k8s-apps to be running ...
	I1205 07:06:39.164079  375543 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:06:39.164127  375543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:06:39.181008  375543 system_svc.go:56] duration metric: took 16.921756ms WaitForService to wait for kubelet
	I1205 07:06:39.181041  375543 kubeadm.go:587] duration metric: took 3.936753325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:06:39.181064  375543 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:06:39.184000  375543 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:06:39.184034  375543 node_conditions.go:123] node cpu capacity is 8
	I1205 07:06:39.184053  375543 node_conditions.go:105] duration metric: took 2.982688ms to run NodePressure ...
	I1205 07:06:39.184070  375543 start.go:242] waiting for startup goroutines ...
	I1205 07:06:39.184085  375543 start.go:247] waiting for cluster config update ...
	I1205 07:06:39.184102  375543 start.go:256] writing updated cluster config ...
	I1205 07:06:39.193568  375543 ssh_runner.go:195] Run: rm -f paused
	I1205 07:06:39.197314  375543 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:39.200374  375543 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rg55r" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:06:41.204973  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:06:41.212073  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:06:41.231583  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:06:41.253120  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:06:41.272824  375309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:06:41.292610  375309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:06:41.308462  375309 ssh_runner.go:195] Run: openssl version
	I1205 07:06:41.316714  375309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.325091  375309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:06:41.332343  375309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.336139  375309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.336194  375309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:06:41.372232  375309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:06:41.379524  375309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:06:41.386631  375309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.393737  375309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:06:41.401581  375309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.405466  375309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.405515  375309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:06:41.439825  375309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:06:41.447189  375309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:06:41.455927  375309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.463164  375309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:06:41.470435  375309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.473992  375309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.474034  375309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:06:41.515208  375309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:06:41.525475  375309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0
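The openssl x509 -hash / ln -fs pairs above follow OpenSSL's c_rehash convention: the certificate's subject hash becomes the name of a <hash>.0 symlink under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem), so OpenSSL-based clients can locate the CA. A stand-alone Go sketch of the same two steps (illustrative only; it shells out to openssl and the paths are taken from this log):

    // sketch: hash a CA certificate and create the <hash>.0 symlink, as the log does above
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/etc/ssl/certs/minikubeCA.pem" // path taken from this log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any existing link, like `ln -fs`
        if err := os.Symlink(cert, link); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
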
	I1205 07:06:41.535050  375309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:06:41.540368  375309 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:06:41.540428  375309 kubeadm.go:401] StartCluster: {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:06:41.540520  375309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:06:41.540579  375309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:06:41.574193  375309 cri.go:89] found id: ""
	I1205 07:06:41.574260  375309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:06:41.582447  375309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:06:41.590634  375309 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:06:41.590683  375309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:06:41.598032  375309 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:06:41.598048  375309 kubeadm.go:158] found existing configuration files:
	
	I1205 07:06:41.598083  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:06:41.605848  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:06:41.605900  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:06:41.613213  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:06:41.620371  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:06:41.620417  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:06:41.627391  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:06:41.634542  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:06:41.634592  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:06:41.641338  375309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:06:41.648894  375309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:06:41.648944  375309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:06:41.656607  375309 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:06:41.696598  375309 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:06:41.696706  375309 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:06:41.759716  375309 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:06:41.759824  375309 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1205 07:06:41.759883  375309 kubeadm.go:319] OS: Linux
	I1205 07:06:41.759954  375309 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:06:41.760020  375309 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:06:41.760091  375309 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:06:41.760146  375309 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:06:41.760192  375309 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:06:41.760252  375309 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:06:41.760365  375309 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:06:41.760434  375309 kubeadm.go:319] CGROUPS_IO: enabled
	I1205 07:06:41.814175  375309 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:06:41.814315  375309 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:06:41.814467  375309 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:06:41.827236  375309 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:06:41.830237  375309 out.go:252]   - Generating certificates and keys ...
	I1205 07:06:41.830391  375309 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:06:41.830478  375309 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:06:41.861271  375309 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:06:42.094457  375309 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:06:42.144264  375309 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:06:42.276913  375309 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:06:42.446846  375309 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:06:42.447034  375309 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-624263] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1205 07:06:42.609304  375309 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:06:42.609696  375309 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-624263] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1205 07:06:42.767082  375309 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:06:43.048880  375309 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:06:43.119451  375309 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:06:43.119727  375309 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:06:43.389014  375309 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:06:43.643799  375309 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:06:43.853126  375309 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:06:44.168810  375309 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:06:44.219881  375309 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:06:44.220746  375309 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:06:44.227994  375309 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1205 07:06:42.556667  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	W1205 07:06:44.557029  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:44.229477  375309 out.go:252]   - Booting up control plane ...
	I1205 07:06:44.229641  375309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:06:44.229761  375309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:06:44.230667  375309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:06:44.249377  375309 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:06:44.249530  375309 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:06:44.258992  375309 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:06:44.259591  375309 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:06:44.259660  375309 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:06:44.400746  375309 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:06:44.400911  375309 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:06:45.401590  375309 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00117802s
	I1205 07:06:45.405602  375309 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:06:45.405744  375309 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1205 07:06:45.405949  375309 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:06:45.406099  375309 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1205 07:06:43.207479  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:45.732411  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:06:46.416593  375309 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.010733066s
	I1205 07:06:47.437314  375309 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.031843502s
	I1205 07:06:49.407519  375309 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00206161s
	I1205 07:06:49.424839  375309 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 07:06:49.434626  375309 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 07:06:49.444666  375309 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 07:06:49.444989  375309 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-624263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 07:06:49.453496  375309 kubeadm.go:319] [bootstrap-token] Using token: 6cz87l.2zljzwp80f64fvtx
	W1205 07:06:47.055999  369138 pod_ready.go:104] pod "coredns-66bc5c9577-lzlm8" is not "Ready", error: <nil>
	I1205 07:06:49.054841  369138 pod_ready.go:94] pod "coredns-66bc5c9577-lzlm8" is "Ready"
	I1205 07:06:49.054862  369138 pod_ready.go:86] duration metric: took 36.004755066s for pod "coredns-66bc5c9577-lzlm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.057541  369138 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.061520  369138 pod_ready.go:94] pod "etcd-default-k8s-diff-port-172186" is "Ready"
	I1205 07:06:49.061544  369138 pod_ready.go:86] duration metric: took 3.984636ms for pod "etcd-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.063582  369138 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.067353  369138 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-172186" is "Ready"
	I1205 07:06:49.067370  369138 pod_ready.go:86] duration metric: took 3.767456ms for pod "kube-apiserver-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.069303  369138 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.254115  369138 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-172186" is "Ready"
	I1205 07:06:49.254136  369138 pod_ready.go:86] duration metric: took 184.787953ms for pod "kube-controller-manager-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.461655  369138 pod_ready.go:83] waiting for pod "kube-proxy-fpss6" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:49.857656  369138 pod_ready.go:94] pod "kube-proxy-fpss6" is "Ready"
	I1205 07:06:49.857685  369138 pod_ready.go:86] duration metric: took 396.007735ms for pod "kube-proxy-fpss6" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:50.055882  369138 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:50.453368  369138 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-172186" is "Ready"
	I1205 07:06:50.453396  369138 pod_ready.go:86] duration metric: took 397.4857ms for pod "kube-scheduler-default-k8s-diff-port-172186" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:06:50.453413  369138 pod_ready.go:40] duration metric: took 37.406615801s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:06:50.507622  369138 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 07:06:50.544152  369138 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-172186" cluster and "default" namespace by default
	I1205 07:06:49.455401  375309 out.go:252]   - Configuring RBAC rules ...
	I1205 07:06:49.455557  375309 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 07:06:49.458435  375309 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 07:06:49.468241  375309 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 07:06:49.470871  375309 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 07:06:49.474251  375309 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 07:06:49.476826  375309 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 07:06:49.814698  375309 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 07:06:50.230601  375309 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 07:06:50.814879  375309 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 07:06:50.816527  375309 kubeadm.go:319] 
	I1205 07:06:50.816618  375309 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 07:06:50.816629  375309 kubeadm.go:319] 
	I1205 07:06:50.816731  375309 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 07:06:50.816746  375309 kubeadm.go:319] 
	I1205 07:06:50.816772  375309 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 07:06:50.816829  375309 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 07:06:50.816889  375309 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 07:06:50.816896  375309 kubeadm.go:319] 
	I1205 07:06:50.816961  375309 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 07:06:50.816968  375309 kubeadm.go:319] 
	I1205 07:06:50.817044  375309 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 07:06:50.817063  375309 kubeadm.go:319] 
	I1205 07:06:50.817143  375309 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 07:06:50.817267  375309 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 07:06:50.817394  375309 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 07:06:50.817409  375309 kubeadm.go:319] 
	I1205 07:06:50.817524  375309 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 07:06:50.817650  375309 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 07:06:50.817659  375309 kubeadm.go:319] 
	I1205 07:06:50.817779  375309 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6cz87l.2zljzwp80f64fvtx \
	I1205 07:06:50.817934  375309 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f21ef1fe4655ade9215ff0d25196a0f1ad174afc7024ad048086e40bbc0de65d \
	I1205 07:06:50.817986  375309 kubeadm.go:319] 	--control-plane 
	I1205 07:06:50.817995  375309 kubeadm.go:319] 
	I1205 07:06:50.818119  375309 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 07:06:50.818130  375309 kubeadm.go:319] 
	I1205 07:06:50.818264  375309 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6cz87l.2zljzwp80f64fvtx \
	I1205 07:06:50.818441  375309 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f21ef1fe4655ade9215ff0d25196a0f1ad174afc7024ad048086e40bbc0de65d 
	I1205 07:06:50.820666  375309 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1205 07:06:50.820805  375309 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:06:50.820836  375309 cni.go:84] Creating CNI manager for ""
	I1205 07:06:50.820842  375309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:06:50.822306  375309 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1205 07:06:50.823377  375309 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 07:06:50.827641  375309 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1205 07:06:50.827656  375309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 07:06:50.841908  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 07:06:51.087943  375309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 07:06:51.088016  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:06:51.088020  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-624263 minikube.k8s.io/updated_at=2025_12_05T07_06_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=newest-cni-624263 minikube.k8s.io/primary=true
	I1205 07:06:51.100314  375309 ops.go:34] apiserver oom_adj: -16
	I1205 07:06:51.182783  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1205 07:06:48.206272  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:50.707482  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:06:51.683501  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:06:52.183521  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:06:52.683875  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:06:53.182851  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:06:53.683772  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:06:54.183522  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:06:54.683707  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:06:55.182882  375309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:06:55.258882  375309 kubeadm.go:1114] duration metric: took 4.170928179s to wait for elevateKubeSystemPrivileges
	I1205 07:06:55.258924  375309 kubeadm.go:403] duration metric: took 13.718499957s to StartCluster
	I1205 07:06:55.258943  375309 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:55.259091  375309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:06:55.260779  375309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:06:55.260992  375309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 07:06:55.261015  375309 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:06:55.260988  375309 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:06:55.261092  375309 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-624263"
	I1205 07:06:55.261103  375309 addons.go:70] Setting default-storageclass=true in profile "newest-cni-624263"
	I1205 07:06:55.261111  375309 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-624263"
	I1205 07:06:55.261136  375309 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-624263"
	I1205 07:06:55.261205  375309 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:06:55.261141  375309 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:06:55.261533  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:55.261795  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:55.264544  375309 out.go:179] * Verifying Kubernetes components...
	I1205 07:06:55.265803  375309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:06:55.286310  375309 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:06:55.286650  375309 addons.go:239] Setting addon default-storageclass=true in "newest-cni-624263"
	I1205 07:06:55.286691  375309 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:06:55.287146  375309 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:06:55.287448  375309 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:55.287467  375309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:06:55.287514  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:55.319294  375309 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:55.319347  375309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:06:55.319375  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:55.319434  375309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:06:55.344025  375309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:06:55.360212  375309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 07:06:55.416520  375309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:06:55.445025  375309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:06:55.461387  375309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:06:55.560069  375309 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1205 07:06:55.562031  375309 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:06:55.562087  375309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:06:55.761642  375309 api_server.go:72] duration metric: took 500.536203ms to wait for apiserver process to appear ...
	I1205 07:06:55.761666  375309 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:06:55.761688  375309 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:06:55.767240  375309 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1205 07:06:55.768170  375309 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1205 07:06:55.768231  375309 api_server.go:141] control plane version: v1.35.0-beta.0
	I1205 07:06:55.768260  375309 api_server.go:131] duration metric: took 6.583544ms to wait for apiserver health ...
	I1205 07:06:55.768274  375309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:06:55.770070  375309 addons.go:530] duration metric: took 509.057154ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 07:06:55.773338  375309 system_pods.go:59] 8 kube-system pods found
	I1205 07:06:55.773374  375309 system_pods.go:61] "coredns-7d764666f9-jkmhj" [126785e3-c7a3-451f-ac72-e05d87bb32f0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1205 07:06:55.773383  375309 system_pods.go:61] "etcd-newest-cni-624263" [9a4fe128-6030-4681-b201-a2a13ac29474] Running
	I1205 07:06:55.773398  375309 system_pods.go:61] "kindnet-fctwl" [29a59939-b66c-4796-9a9e-e1b442bccf1f] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:06:55.773405  375309 system_pods.go:61] "kube-apiserver-newest-cni-624263" [2fc9852f-c8d5-41c2-8dbe-41056e227d75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:06:55.773413  375309 system_pods.go:61] "kube-controller-manager-newest-cni-624263" [957b864f-8ee5-40ce-9e1f-4396041c4525] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:06:55.773419  375309 system_pods.go:61] "kube-proxy-8v5qr" [59595bdd-49dc-4491-b494-1c48474ea8c4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:06:55.773429  375309 system_pods.go:61] "kube-scheduler-newest-cni-624263" [a3c04907-1ac1-43af-827b-b4ab46dd553c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:06:55.773433  375309 system_pods.go:61] "storage-provisioner" [1cfc97af-739e-4ee9-838a-75962c29bc63] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1205 07:06:55.773441  375309 system_pods.go:74] duration metric: took 5.158207ms to wait for pod list to return data ...
	I1205 07:06:55.773448  375309 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:06:55.775586  375309 default_sa.go:45] found service account: "default"
	I1205 07:06:55.775606  375309 default_sa.go:55] duration metric: took 2.152329ms for default service account to be created ...
	I1205 07:06:55.775617  375309 kubeadm.go:587] duration metric: took 514.514176ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:06:55.775636  375309 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:06:55.777703  375309 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:06:55.777727  375309 node_conditions.go:123] node cpu capacity is 8
	I1205 07:06:55.777746  375309 node_conditions.go:105] duration metric: took 2.104286ms to run NodePressure ...
	I1205 07:06:55.777760  375309 start.go:242] waiting for startup goroutines ...
	I1205 07:06:56.064119  375309 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-624263" context rescaled to 1 replicas
	I1205 07:06:56.064157  375309 start.go:247] waiting for cluster config update ...
	I1205 07:06:56.064168  375309 start.go:256] writing updated cluster config ...
	I1205 07:06:56.064460  375309 ssh_runner.go:195] Run: rm -f paused
	I1205 07:06:56.114430  375309 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1205 07:06:56.116214  375309 out.go:179] * Done! kubectl is now configured to use "newest-cni-624263" cluster and "default" namespace by default
	W1205 07:06:53.205893  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:55.206964  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.730620527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.7319563Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b8939814-6144-4715-a4d2-61234dc4e9e0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.733627978Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ba8195c7-e425-4591-ac01-fd9aaccd10d2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.733734861Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e08046de-966d-4141-bc49-bd6986727035 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.73524276Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.736906254Z" level=info msg="Ran pod sandbox 4ab6aece951fc41cba7df3eb4195ed1e3a78ba4b44242175c243efb12a41aa3f with infra container: kube-system/kindnet-fctwl/POD" id=ba8195c7-e425-4591-ac01-fd9aaccd10d2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.73761929Z" level=info msg="Creating container: kube-system/kube-proxy-8v5qr/kube-proxy" id=92d5917f-1765-4513-946f-a6cd2667324a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.737728746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.738074709Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8b70e10a-84c5-4495-b8fb-e77f0b216524 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.73817743Z" level=info msg="Image docker.io/kindest/kindnetd:v20250512-df8de77b not found" id=8b70e10a-84c5-4495-b8fb-e77f0b216524 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.738234867Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20250512-df8de77b found" id=8b70e10a-84c5-4495-b8fb-e77f0b216524 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.739290154Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250512-df8de77b" id=461e11a8-0ea9-4ef2-bb74-bfc3b21a1f3e name=/runtime.v1.ImageService/PullImage
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.741206242Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250512-df8de77b\""
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.744199339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.745541786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.776471892Z" level=info msg="Created container 53ba610c1e2567cc477aad55c1e2d934770bf37e6d0357df10425cf5c569d6f2: kube-system/kube-proxy-8v5qr/kube-proxy" id=92d5917f-1765-4513-946f-a6cd2667324a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.77721877Z" level=info msg="Starting container: 53ba610c1e2567cc477aad55c1e2d934770bf37e6d0357df10425cf5c569d6f2" id=3a490f2c-e943-4d1d-ae39-0a7ba695846e name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:55 newest-cni-624263 crio[775]: time="2025-12-05T07:06:55.780152358Z" level=info msg="Started container" PID=2542 containerID=53ba610c1e2567cc477aad55c1e2d934770bf37e6d0357df10425cf5c569d6f2 description=kube-system/kube-proxy-8v5qr/kube-proxy id=3a490f2c-e943-4d1d-ae39-0a7ba695846e name=/runtime.v1.RuntimeService/StartContainer sandboxID=a79a65126c5cd3645246150f01c0b894608125eb3a7e3e7665719c5313b97230
	Dec 05 07:06:57 newest-cni-624263 crio[775]: time="2025-12-05T07:06:57.448361766Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11" id=461e11a8-0ea9-4ef2-bb74-bfc3b21a1f3e name=/runtime.v1.ImageService/PullImage
	Dec 05 07:06:57 newest-cni-624263 crio[775]: time="2025-12-05T07:06:57.449112176Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b96ccbeb-e778-4003-afb8-2af1c04972ec name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:57 newest-cni-624263 crio[775]: time="2025-12-05T07:06:57.451526811Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=251013c3-763e-47d5-a30a-850776db2bff name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:57 newest-cni-624263 crio[775]: time="2025-12-05T07:06:57.45458836Z" level=info msg="Creating container: kube-system/kindnet-fctwl/kindnet-cni" id=90c78240-20f8-4747-a81a-38beaac22eef name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:57 newest-cni-624263 crio[775]: time="2025-12-05T07:06:57.454699109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:57 newest-cni-624263 crio[775]: time="2025-12-05T07:06:57.458205861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:57 newest-cni-624263 crio[775]: time="2025-12-05T07:06:57.458709587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0b51fd7e239b3       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11   Less than a second ago   Running             kindnet-cni               0                   4ab6aece951fc       kindnet-fctwl                               kube-system
	53ba610c1e256       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                     1 second ago             Running             kube-proxy                0                   a79a65126c5cd       kube-proxy-8v5qr                            kube-system
	1c55600a48d57       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                     11 seconds ago           Running             kube-apiserver            0                   69ada1b51fbf4       kube-apiserver-newest-cni-624263            kube-system
	8090ee7031585       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                     11 seconds ago           Running             kube-controller-manager   0                   7c50b2350cf99       kube-controller-manager-newest-cni-624263   kube-system
	ae5753e0ff693       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     11 seconds ago           Running             etcd                      0                   1aff151ac9e57       etcd-newest-cni-624263                      kube-system
	529ab6bdf8dbf       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                     12 seconds ago           Running             kube-scheduler            0                   a2fee0fa1e642       kube-scheduler-newest-cni-624263            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-624263
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-624263
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=newest-cni-624263
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_06_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:06:47 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-624263
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:06:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:06:50 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:06:50 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:06:50 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 05 Dec 2025 07:06:50 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-624263
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                74ead395-c6a4-4eb4-a8b4-1e768c64ff0f
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-624263                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-fctwl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-624263             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-624263    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-8v5qr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-624263             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-624263 event: Registered Node newest-cni-624263 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [ae5753e0ff693aa59b1591fde920c4ce9891c0c293ca32652cdc2995f2270e46] <==
	{"level":"warn","ts":"2025-12-05T07:06:46.573915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.589012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.596019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.604609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.612899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.621422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.629637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.637884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.647114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.655508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.663792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.673044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.681474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.689204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.699975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.715708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.723615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.731777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.739908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.748799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.764123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.771027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.778271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.785870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:46.839568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52068","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:06:57 up  1:49,  0 user,  load average: 3.30, 3.26, 2.25
	Linux newest-cni-624263 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [1c55600a48d575b65008a16ef049c34811ef1d5c76ccdf283604f98250a77f19] <==
	I1205 07:06:47.484434       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 07:06:47.484446       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 07:06:47.485079       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 07:06:47.487540       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1205 07:06:47.487616       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:06:47.491366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:06:47.677789       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:06:48.361576       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1205 07:06:48.366103       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1205 07:06:48.366119       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1205 07:06:48.874452       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:06:48.910712       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:06:48.964515       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1205 07:06:48.970583       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1205 07:06:48.971692       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:06:48.975723       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:06:49.388197       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:06:50.220221       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:06:50.229806       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 07:06:50.236908       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1205 07:06:55.140517       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:06:55.294629       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:06:55.301419       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:06:55.391369       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1205 07:06:55.391369       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8090ee7031585129be5128e1b3f688eedbb2898f0f01efca02f9b6793ecd0f30] <==
	I1205 07:06:54.192380       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.192415       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.192470       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.192489       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.192545       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.192900       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.192083       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.193194       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.193237       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.193302       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.193393       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.193443       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.193632       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.193634       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.193922       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1205 07:06:54.194109       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-624263"
	I1205 07:06:54.194180       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1205 07:06:54.194220       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.199289       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-624263" podCIDRs=["10.42.0.0/24"]
	I1205 07:06:54.204945       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.209792       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:06:54.292457       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:54.292474       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1205 07:06:54.292480       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1205 07:06:54.310795       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [53ba610c1e2567cc477aad55c1e2d934770bf37e6d0357df10425cf5c569d6f2] <==
	I1205 07:06:55.817411       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:06:55.884800       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:06:55.984984       1 shared_informer.go:377] "Caches are synced"
	I1205 07:06:55.985028       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1205 07:06:55.985126       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:06:56.004087       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:06:56.004134       1 server_linux.go:136] "Using iptables Proxier"
	I1205 07:06:56.009207       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:06:56.009627       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1205 07:06:56.009661       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:06:56.010730       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:06:56.010756       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:06:56.010763       1 config.go:200] "Starting service config controller"
	I1205 07:06:56.010782       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:06:56.010791       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:06:56.010796       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:06:56.010824       1 config.go:309] "Starting node config controller"
	I1205 07:06:56.010833       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:06:56.010842       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:06:56.111451       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:06:56.111471       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 07:06:56.111506       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [529ab6bdf8dbf6bff2d83be24e369c6a33ebda0ae81ed759c9a4994daa1dbc70] <==
	E1205 07:06:47.441165       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1205 07:06:47.441629       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1205 07:06:47.441922       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1205 07:06:47.442102       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1205 07:06:48.313410       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1205 07:06:48.313448       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1205 07:06:48.314304       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1205 07:06:48.314385       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1205 07:06:48.352898       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1205 07:06:48.353865       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1205 07:06:48.358847       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1205 07:06:48.359706       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1205 07:06:48.361645       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1205 07:06:48.362562       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1205 07:06:48.500589       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1205 07:06:48.501746       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1205 07:06:48.548343       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1205 07:06:48.549411       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1205 07:06:48.565637       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1205 07:06:48.566700       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1205 07:06:48.712754       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1205 07:06:48.718658       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1205 07:06:48.730954       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1205 07:06:48.731918       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1205 07:06:50.931464       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 05 07:06:51 newest-cni-624263 kubelet[2258]: E1205 07:06:51.106900    2258 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-624263\" already exists" pod="kube-system/kube-controller-manager-newest-cni-624263"
	Dec 05 07:06:51 newest-cni-624263 kubelet[2258]: E1205 07:06:51.106971    2258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-624263" containerName="kube-controller-manager"
	Dec 05 07:06:51 newest-cni-624263 kubelet[2258]: I1205 07:06:51.133171    2258 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-624263" podStartSLOduration=1.133152104 podStartE2EDuration="1.133152104s" podCreationTimestamp="2025-12-05 07:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:51.121971228 +0000 UTC m=+1.150718751" watchObservedRunningTime="2025-12-05 07:06:51.133152104 +0000 UTC m=+1.161899627"
	Dec 05 07:06:51 newest-cni-624263 kubelet[2258]: I1205 07:06:51.133351    2258 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-624263" podStartSLOduration=1.133308987 podStartE2EDuration="1.133308987s" podCreationTimestamp="2025-12-05 07:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:51.132908392 +0000 UTC m=+1.161655912" watchObservedRunningTime="2025-12-05 07:06:51.133308987 +0000 UTC m=+1.162056511"
	Dec 05 07:06:51 newest-cni-624263 kubelet[2258]: I1205 07:06:51.155699    2258 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-624263" podStartSLOduration=1.155679549 podStartE2EDuration="1.155679549s" podCreationTimestamp="2025-12-05 07:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:51.146076779 +0000 UTC m=+1.174824302" watchObservedRunningTime="2025-12-05 07:06:51.155679549 +0000 UTC m=+1.184427073"
	Dec 05 07:06:51 newest-cni-624263 kubelet[2258]: I1205 07:06:51.155865    2258 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-624263" podStartSLOduration=1.155857703 podStartE2EDuration="1.155857703s" podCreationTimestamp="2025-12-05 07:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:51.155111748 +0000 UTC m=+1.183859272" watchObservedRunningTime="2025-12-05 07:06:51.155857703 +0000 UTC m=+1.184605228"
	Dec 05 07:06:52 newest-cni-624263 kubelet[2258]: E1205 07:06:52.093637    2258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-624263" containerName="etcd"
	Dec 05 07:06:52 newest-cni-624263 kubelet[2258]: E1205 07:06:52.093748    2258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-624263" containerName="kube-controller-manager"
	Dec 05 07:06:52 newest-cni-624263 kubelet[2258]: E1205 07:06:52.093834    2258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-624263" containerName="kube-apiserver"
	Dec 05 07:06:52 newest-cni-624263 kubelet[2258]: E1205 07:06:52.094001    2258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-624263" containerName="kube-scheduler"
	Dec 05 07:06:53 newest-cni-624263 kubelet[2258]: E1205 07:06:53.095612    2258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-624263" containerName="kube-scheduler"
	Dec 05 07:06:53 newest-cni-624263 kubelet[2258]: E1205 07:06:53.095730    2258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-624263" containerName="kube-apiserver"
	Dec 05 07:06:54 newest-cni-624263 kubelet[2258]: E1205 07:06:54.102431    2258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-624263" containerName="kube-scheduler"
	Dec 05 07:06:54 newest-cni-624263 kubelet[2258]: I1205 07:06:54.289010    2258 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 05 07:06:54 newest-cni-624263 kubelet[2258]: I1205 07:06:54.289699    2258 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 05 07:06:55 newest-cni-624263 kubelet[2258]: E1205 07:06:55.358990    2258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-624263" containerName="kube-controller-manager"
	Dec 05 07:06:55 newest-cni-624263 kubelet[2258]: I1205 07:06:55.474680    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59595bdd-49dc-4491-b494-1c48474ea8c4-kube-proxy\") pod \"kube-proxy-8v5qr\" (UID: \"59595bdd-49dc-4491-b494-1c48474ea8c4\") " pod="kube-system/kube-proxy-8v5qr"
	Dec 05 07:06:55 newest-cni-624263 kubelet[2258]: I1205 07:06:55.474725    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59595bdd-49dc-4491-b494-1c48474ea8c4-lib-modules\") pod \"kube-proxy-8v5qr\" (UID: \"59595bdd-49dc-4491-b494-1c48474ea8c4\") " pod="kube-system/kube-proxy-8v5qr"
	Dec 05 07:06:55 newest-cni-624263 kubelet[2258]: I1205 07:06:55.474763    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vxmp\" (UniqueName: \"kubernetes.io/projected/59595bdd-49dc-4491-b494-1c48474ea8c4-kube-api-access-4vxmp\") pod \"kube-proxy-8v5qr\" (UID: \"59595bdd-49dc-4491-b494-1c48474ea8c4\") " pod="kube-system/kube-proxy-8v5qr"
	Dec 05 07:06:55 newest-cni-624263 kubelet[2258]: I1205 07:06:55.474789    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29a59939-b66c-4796-9a9e-e1b442bccf1f-lib-modules\") pod \"kindnet-fctwl\" (UID: \"29a59939-b66c-4796-9a9e-e1b442bccf1f\") " pod="kube-system/kindnet-fctwl"
	Dec 05 07:06:55 newest-cni-624263 kubelet[2258]: I1205 07:06:55.474815    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdw97\" (UniqueName: \"kubernetes.io/projected/29a59939-b66c-4796-9a9e-e1b442bccf1f-kube-api-access-xdw97\") pod \"kindnet-fctwl\" (UID: \"29a59939-b66c-4796-9a9e-e1b442bccf1f\") " pod="kube-system/kindnet-fctwl"
	Dec 05 07:06:55 newest-cni-624263 kubelet[2258]: I1205 07:06:55.474835    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29a59939-b66c-4796-9a9e-e1b442bccf1f-xtables-lock\") pod \"kindnet-fctwl\" (UID: \"29a59939-b66c-4796-9a9e-e1b442bccf1f\") " pod="kube-system/kindnet-fctwl"
	Dec 05 07:06:55 newest-cni-624263 kubelet[2258]: I1205 07:06:55.474857    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59595bdd-49dc-4491-b494-1c48474ea8c4-xtables-lock\") pod \"kube-proxy-8v5qr\" (UID: \"59595bdd-49dc-4491-b494-1c48474ea8c4\") " pod="kube-system/kube-proxy-8v5qr"
	Dec 05 07:06:55 newest-cni-624263 kubelet[2258]: I1205 07:06:55.474877    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/29a59939-b66c-4796-9a9e-e1b442bccf1f-cni-cfg\") pod \"kindnet-fctwl\" (UID: \"29a59939-b66c-4796-9a9e-e1b442bccf1f\") " pod="kube-system/kindnet-fctwl"
	Dec 05 07:06:56 newest-cni-624263 kubelet[2258]: I1205 07:06:56.118904    2258 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-8v5qr" podStartSLOduration=1.118885121 podStartE2EDuration="1.118885121s" podCreationTimestamp="2025-12-05 07:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:56.11880531 +0000 UTC m=+6.147552834" watchObservedRunningTime="2025-12-05 07:06:56.118885121 +0000 UTC m=+6.147632644"
	

                                                
                                                
-- /stdout --
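(Editor's note, not part of the captured output.) The stderr earlier in this dump records the addon-enable path for this profile: minikube copies storage-provisioner.yaml and storageclass.yaml onto the node and applies them with the bundled kubectl. As a rough, hand-run replay of that step — profile name, kubectl path and manifest paths are taken verbatim from the log above; it assumes the newest-cni-624263 node is still up and reachable via minikube ssh:

	# replay the two apply steps recorded at 07:06:55 in the stderr above
	minikube -p newest-cni-624263 ssh -- "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml"
	minikube -p newest-cni-624263 ssh -- "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml"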
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-624263 -n newest-cni-624263
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-624263 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-jkmhj storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-624263 describe pod coredns-7d764666f9-jkmhj storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-624263 describe pod coredns-7d764666f9-jkmhj storage-provisioner: exit status 1 (64.666247ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jkmhj" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-624263 describe pod coredns-7d764666f9-jkmhj storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.19s)
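(Editor's note, not part of the captured output.) The post-mortem above shows which pods were still non-running when the addon check ran; the node still carried the node.kubernetes.io/not-ready:NoSchedule taint, so coredns and storage-provisioner could not be scheduled yet. A minimal way to repeat that inspection by hand, using the context name from the log and mirroring the field selector helpers_test.go uses:

	kubectl --context newest-cni-624263 get po -A --field-selector=status.phase!=Running
	kubectl --context newest-cni-624263 -n kube-system describe pod coredns-7d764666f9-jkmhj storage-provisioner

The describe step here adds -n kube-system; one plausible reading of the exit status 1 above is that the unqualified describe queried the default namespace, where those pod names do not exist (and the generated coredns pod name will differ on any later run).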

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-172186 --alsologtostderr -v=1
E1205 07:07:03.085669   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:03.092020   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:03.103340   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:03.124783   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:03.166298   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:03.247872   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:03.409752   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:03.731618   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:04.373298   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-172186 --alsologtostderr -v=1: exit status 80 (2.20575594s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-172186 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:07:02.320862  385607 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:07:02.320983  385607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:02.320993  385607 out.go:374] Setting ErrFile to fd 2...
	I1205 07:07:02.320999  385607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:02.321199  385607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:07:02.321425  385607 out.go:368] Setting JSON to false
	I1205 07:07:02.321442  385607 mustload.go:66] Loading cluster: default-k8s-diff-port-172186
	I1205 07:07:02.321770  385607 config.go:182] Loaded profile config "default-k8s-diff-port-172186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:07:02.322142  385607 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172186 --format={{.State.Status}}
	I1205 07:07:02.339436  385607 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:07:02.339657  385607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:02.394764  385607 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-05 07:07:02.385906549 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:02.395587  385607 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-172186 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1205 07:07:02.397391  385607 out.go:179] * Pausing node default-k8s-diff-port-172186 ... 
	I1205 07:07:02.398614  385607 host.go:66] Checking if "default-k8s-diff-port-172186" exists ...
	I1205 07:07:02.398854  385607 ssh_runner.go:195] Run: systemctl --version
	I1205 07:07:02.398892  385607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172186
	I1205 07:07:02.415605  385607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/default-k8s-diff-port-172186/id_rsa Username:docker}
	I1205 07:07:02.512523  385607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:02.541480  385607 pause.go:52] kubelet running: true
	I1205 07:07:02.541570  385607 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:02.710761  385607 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:02.710854  385607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:02.772867  385607 cri.go:89] found id: "0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4"
	I1205 07:07:02.772893  385607 cri.go:89] found id: "8ca6589a660b2e7ecdcaa10b0a47179aae45ca9174311253ee76dccda4795574"
	I1205 07:07:02.772898  385607 cri.go:89] found id: "2b4ad487f94d05e6763801ea37294d3cda06090f5ad53f839147ad1672d2cf8d"
	I1205 07:07:02.772901  385607 cri.go:89] found id: "3d055b1cda12db6333c4b7b7e4344c3b23a3f4ec76f76fce308840302458b641"
	I1205 07:07:02.772903  385607 cri.go:89] found id: "345ebcc959b75bb217149937541b68c27a98c41bdc6e9cf28541b7f32e891d5f"
	I1205 07:07:02.772908  385607 cri.go:89] found id: "ed8de5e69d48178f99d8fc4509335772d9301f83872fdafa6ee82b6e6883c141"
	I1205 07:07:02.772911  385607 cri.go:89] found id: "b8424f777108894c3d90c6444a4cb21c9dab385dcfca8b378b0637e27eb4bd6f"
	I1205 07:07:02.772913  385607 cri.go:89] found id: "b75fc581167e9dc3ab0503563eaf8c4d2824d2a1cb80aeb0d90ec0ccbe49c84e"
	I1205 07:07:02.772916  385607 cri.go:89] found id: "d42f7b44a3dec7cdfb77e71f8c1b0ea379df337d93c48967c985cfb5efc79957"
	I1205 07:07:02.772922  385607 cri.go:89] found id: "5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842"
	I1205 07:07:02.772944  385607 cri.go:89] found id: "63ca6b04e977b79b30859dcf6992da6b3a0f31873efd6b199ad9754419183484"
	I1205 07:07:02.772958  385607 cri.go:89] found id: ""
	I1205 07:07:02.772999  385607 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:02.784034  385607 retry.go:31] will retry after 337.566347ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:02Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:03.122555  385607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:03.135051  385607 pause.go:52] kubelet running: false
	I1205 07:07:03.135102  385607 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:03.274783  385607 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:03.274854  385607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:03.336530  385607 cri.go:89] found id: "0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4"
	I1205 07:07:03.336549  385607 cri.go:89] found id: "8ca6589a660b2e7ecdcaa10b0a47179aae45ca9174311253ee76dccda4795574"
	I1205 07:07:03.336553  385607 cri.go:89] found id: "2b4ad487f94d05e6763801ea37294d3cda06090f5ad53f839147ad1672d2cf8d"
	I1205 07:07:03.336556  385607 cri.go:89] found id: "3d055b1cda12db6333c4b7b7e4344c3b23a3f4ec76f76fce308840302458b641"
	I1205 07:07:03.336560  385607 cri.go:89] found id: "345ebcc959b75bb217149937541b68c27a98c41bdc6e9cf28541b7f32e891d5f"
	I1205 07:07:03.336565  385607 cri.go:89] found id: "ed8de5e69d48178f99d8fc4509335772d9301f83872fdafa6ee82b6e6883c141"
	I1205 07:07:03.336569  385607 cri.go:89] found id: "b8424f777108894c3d90c6444a4cb21c9dab385dcfca8b378b0637e27eb4bd6f"
	I1205 07:07:03.336573  385607 cri.go:89] found id: "b75fc581167e9dc3ab0503563eaf8c4d2824d2a1cb80aeb0d90ec0ccbe49c84e"
	I1205 07:07:03.336578  385607 cri.go:89] found id: "d42f7b44a3dec7cdfb77e71f8c1b0ea379df337d93c48967c985cfb5efc79957"
	I1205 07:07:03.336586  385607 cri.go:89] found id: "5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842"
	I1205 07:07:03.336591  385607 cri.go:89] found id: "63ca6b04e977b79b30859dcf6992da6b3a0f31873efd6b199ad9754419183484"
	I1205 07:07:03.336596  385607 cri.go:89] found id: ""
	I1205 07:07:03.336633  385607 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:03.347465  385607 retry.go:31] will retry after 212.022464ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:03Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:03.559916  385607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:03.572191  385607 pause.go:52] kubelet running: false
	I1205 07:07:03.572249  385607 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:03.706275  385607 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:03.706371  385607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:03.768823  385607 cri.go:89] found id: "0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4"
	I1205 07:07:03.768845  385607 cri.go:89] found id: "8ca6589a660b2e7ecdcaa10b0a47179aae45ca9174311253ee76dccda4795574"
	I1205 07:07:03.768851  385607 cri.go:89] found id: "2b4ad487f94d05e6763801ea37294d3cda06090f5ad53f839147ad1672d2cf8d"
	I1205 07:07:03.768856  385607 cri.go:89] found id: "3d055b1cda12db6333c4b7b7e4344c3b23a3f4ec76f76fce308840302458b641"
	I1205 07:07:03.768860  385607 cri.go:89] found id: "345ebcc959b75bb217149937541b68c27a98c41bdc6e9cf28541b7f32e891d5f"
	I1205 07:07:03.768866  385607 cri.go:89] found id: "ed8de5e69d48178f99d8fc4509335772d9301f83872fdafa6ee82b6e6883c141"
	I1205 07:07:03.768870  385607 cri.go:89] found id: "b8424f777108894c3d90c6444a4cb21c9dab385dcfca8b378b0637e27eb4bd6f"
	I1205 07:07:03.768874  385607 cri.go:89] found id: "b75fc581167e9dc3ab0503563eaf8c4d2824d2a1cb80aeb0d90ec0ccbe49c84e"
	I1205 07:07:03.768878  385607 cri.go:89] found id: "d42f7b44a3dec7cdfb77e71f8c1b0ea379df337d93c48967c985cfb5efc79957"
	I1205 07:07:03.768886  385607 cri.go:89] found id: "5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842"
	I1205 07:07:03.768891  385607 cri.go:89] found id: "63ca6b04e977b79b30859dcf6992da6b3a0f31873efd6b199ad9754419183484"
	I1205 07:07:03.768899  385607 cri.go:89] found id: ""
	I1205 07:07:03.768945  385607 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:03.779689  385607 retry.go:31] will retry after 460.181271ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:03Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:04.240303  385607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:04.252926  385607 pause.go:52] kubelet running: false
	I1205 07:07:04.252984  385607 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:04.389788  385607 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:04.389860  385607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:04.451275  385607 cri.go:89] found id: "0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4"
	I1205 07:07:04.451301  385607 cri.go:89] found id: "8ca6589a660b2e7ecdcaa10b0a47179aae45ca9174311253ee76dccda4795574"
	I1205 07:07:04.451308  385607 cri.go:89] found id: "2b4ad487f94d05e6763801ea37294d3cda06090f5ad53f839147ad1672d2cf8d"
	I1205 07:07:04.451313  385607 cri.go:89] found id: "3d055b1cda12db6333c4b7b7e4344c3b23a3f4ec76f76fce308840302458b641"
	I1205 07:07:04.451317  385607 cri.go:89] found id: "345ebcc959b75bb217149937541b68c27a98c41bdc6e9cf28541b7f32e891d5f"
	I1205 07:07:04.451334  385607 cri.go:89] found id: "ed8de5e69d48178f99d8fc4509335772d9301f83872fdafa6ee82b6e6883c141"
	I1205 07:07:04.451340  385607 cri.go:89] found id: "b8424f777108894c3d90c6444a4cb21c9dab385dcfca8b378b0637e27eb4bd6f"
	I1205 07:07:04.451345  385607 cri.go:89] found id: "b75fc581167e9dc3ab0503563eaf8c4d2824d2a1cb80aeb0d90ec0ccbe49c84e"
	I1205 07:07:04.451350  385607 cri.go:89] found id: "d42f7b44a3dec7cdfb77e71f8c1b0ea379df337d93c48967c985cfb5efc79957"
	I1205 07:07:04.451358  385607 cri.go:89] found id: "5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842"
	I1205 07:07:04.451363  385607 cri.go:89] found id: "63ca6b04e977b79b30859dcf6992da6b3a0f31873efd6b199ad9754419183484"
	I1205 07:07:04.451368  385607 cri.go:89] found id: ""
	I1205 07:07:04.451410  385607 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:04.464691  385607 out.go:203] 
	W1205 07:07:04.465939  385607 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 07:07:04.465956  385607 out.go:285] * 
	* 
	W1205 07:07:04.470054  385607 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:07:04.471366  385607 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-172186 --alsologtostderr -v=1 failed: exit status 80
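Note on the failure mode: the stderr above shows crictl returning the kube-system container IDs, yet every "sudo runc list -f json" attempt fails with "open /run/runc: no such file or directory", and after three retries the pause path exits with GUEST_PAUSE. The short Go program below is a minimal sketch, not minikube's actual pause.go; the function name, retry delays, and error wording are illustrative assumptions, kept only to make the list-and-retry shape in the log easier to follow.

// Sketch (assumed helper names): run `sudo runc list -f json` and retry briefly,
// mirroring the retry.go lines in the log above, then fail like GUEST_PAUSE.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunningContainers shells out to runc the same way the pause path does.
func listRunningContainers() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// The failure seen in this report: runc's state dir /run/runc is absent
		// on the crio node, so listing exits with status 1.
		return nil, fmt.Errorf("list running: runc: %v: %s", err, out)
	}
	return out, nil
}

func main() {
	// Illustrative delays roughly matching the 337ms/212ms/460ms retries above.
	delays := []time.Duration{300 * time.Millisecond, 200 * time.Millisecond, 450 * time.Millisecond}
	var lastErr error
	for _, d := range delays {
		out, err := listRunningContainers()
		if err == nil {
			fmt.Printf("runc containers: %s\n", out)
			return
		}
		lastErr = err
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	fmt.Printf("Exiting due to GUEST_PAUSE: Pause: %v\n", lastErr)
}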
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-172186
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-172186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c",
	        "Created": "2025-12-05T07:04:58.706172169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369344,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:06:01.394443264Z",
	            "FinishedAt": "2025-12-05T07:06:00.526089316Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/hostname",
	        "HostsPath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/hosts",
	        "LogPath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c-json.log",
	        "Name": "/default-k8s-diff-port-172186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-172186:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-172186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c",
	                "LowerDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-172186",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-172186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-172186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-172186",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-172186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d930d575028e8e3b0c5b4d828a070dba7ba3c3f3d5127cdc220c8e4afc32b3a4",
	            "SandboxKey": "/var/run/docker/netns/d930d575028e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-172186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7252f408ef750a913b6fabe10d1ab3c2a2b877d7652581ebca03873c25ab3784",
	                    "EndpointID": "b26e4c9526c5476b08f9535e30117e51b87b69bd4ef2348d834c904fea7f5514",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "92:25:b6:17:88:ba",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-172186",
	                        "b4ba7170def8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
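The inspect output above shows the node container still Running (only the pause step failed, not the machine), and NetworkSettings.Ports maps 22/tcp to 127.0.0.1:33123, the same SSH endpoint the pause command resolved earlier with its `docker container inspect -f` template. The standalone sketch below reproduces that lookup; it is not minikube code, and only the profile name is taken from this report.

// Sketch: resolve the host port bound to 22/tcp on the node container,
// the way the `docker container inspect -f ...` call in the stderr does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "default-k8s-diff-port-172186"
	// Same Go-template query as in the log, minus minikube's extra quoting.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, profile).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	port := strings.TrimSpace(string(out))
	// In this run the result was 33123, matching NetworkSettings.Ports above.
	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", port)
}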
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186: exit status 2 (319.570544ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-172186 logs -n 25
E1205 07:07:05.654840   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-172186 logs -n 25: (1.024446558s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-172186 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p no-preload-008839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-770390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ image   │ no-preload-008839 image list --format=json                                                                                                                                                                                                           │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p no-preload-008839 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-624263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p newest-cni-624263 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-624263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │                     │
	│ image   │ default-k8s-diff-port-172186 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ pause   │ -p default-k8s-diff-port-172186 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:07:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:07:01.213912  384982 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:07:01.214313  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214349  384982 out.go:374] Setting ErrFile to fd 2...
	I1205 07:07:01.214355  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214781  384982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:07:01.215653  384982 out.go:368] Setting JSON to false
	I1205 07:07:01.216724  384982 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6565,"bootTime":1764911856,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:07:01.216808  384982 start.go:143] virtualization: kvm guest
	I1205 07:07:01.218407  384982 out.go:179] * [newest-cni-624263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:07:01.219810  384982 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:07:01.219833  384982 notify.go:221] Checking for updates...
	I1205 07:07:01.222062  384982 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:07:01.223099  384982 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:01.224159  384982 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:07:01.228780  384982 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:07:01.229941  384982 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:07:01.231538  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:01.232012  384982 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:07:01.255273  384982 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:07:01.255390  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.307181  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.297693108 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.307271  384982 docker.go:319] overlay module found
	I1205 07:07:01.308817  384982 out.go:179] * Using the docker driver based on existing profile
	I1205 07:07:01.309938  384982 start.go:309] selected driver: docker
	I1205 07:07:01.309951  384982 start.go:927] validating driver "docker" against &{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.310051  384982 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:07:01.310627  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.362953  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.353513591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.363234  384982 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:07:01.363265  384982 cni.go:84] Creating CNI manager for ""
	I1205 07:07:01.363312  384982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:07:01.363388  384982 start.go:353] cluster config:
	{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.364930  384982 out.go:179] * Starting "newest-cni-624263" primary control-plane node in "newest-cni-624263" cluster
	I1205 07:07:01.365960  384982 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:07:01.367044  384982 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	W1205 07:06:57.706664  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:59.707033  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:01.368093  384982 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:07:01.368198  384982 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:07:01.387169  384982 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:07:01.387192  384982 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:07:01.393466  384982 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1205 07:07:01.635612  384982 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 07:07:01.635800  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.635881  384982 cache.go:107] acquiring lock: {Name:mk98363952ca1815516604fb7dbfef9be11a7d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635913  384982 cache.go:107] acquiring lock: {Name:mkf79bca1dcd2e8402871ccbd85f08189f26d5a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635887  384982 cache.go:107] acquiring lock: {Name:mk7e52439bbd1c3c582b2dbb20db8467b0caa4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635883  384982 cache.go:107] acquiring lock: {Name:mk205a6d5dedd135c0c99429d72b9328d8d5dc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635961  384982 cache.go:107] acquiring lock: {Name:mk167c9428ef1965e0e29561c9593491905126f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636001  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:07:01.635990  384982 cache.go:107] acquiring lock: {Name:mk64ac073eb60c52be1998c1349c3f317eb7eb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:07:01.636013  384982 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 137.69µs
	I1205 07:07:01.636037  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:07:01.636039  384982 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:07:01.636031  384982 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 171.708µs
	I1205 07:07:01.636003  384982 cache.go:107] acquiring lock: {Name:mk55ddd5ec022e6049bc6d750efbad0639669233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636029  384982 cache.go:107] acquiring lock: {Name:mk4eccc9886628e868c0adec616b704f1c193356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636046  384982 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 87.511µs
	I1205 07:07:01.636052  384982 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636064  384982 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636066  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:07:01.636074  384982 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 88.508µs
	I1205 07:07:01.636082  384982 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:07:01.636019  384982 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 125.111µs
	I1205 07:07:01.636098  384982 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:07:01.636112  384982 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:07:01.636042  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:07:01.636150  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:07:01.636147  384982 start.go:360] acquireMachinesLock for newest-cni-624263: {Name:mka35bbd7b5824f70f8017fd9b3a0ee56ab72931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636147  384982 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 265.61µs
	I1205 07:07:01.636162  384982 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636158  384982 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 197.698µs
	I1205 07:07:01.636178  384982 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:07:01.636191  384982 start.go:364] duration metric: took 30.266µs to acquireMachinesLock for "newest-cni-624263"
	I1205 07:07:01.636187  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:07:01.636206  384982 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:07:01.636205  384982 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 226.523µs
	I1205 07:07:01.636213  384982 fix.go:54] fixHost starting: 
	I1205 07:07:01.636216  384982 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636234  384982 cache.go:87] Successfully saved all images to host disk.
	I1205 07:07:01.636479  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.654206  384982 fix.go:112] recreateIfNeeded on newest-cni-624263: state=Stopped err=<nil>
	W1205 07:07:01.654241  384982 fix.go:138] unexpected machine state, will restart: <nil>
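The two 404s above are expected here: no preloaded image tarball has been published for v1.35.0-beta.0 with CRI-O, so minikube falls back to the per-image cache it reports saving to disk a few lines later. A quick sketch for confirming the same thing by hand, using the exact URL quoted in the warning:

	# probe the preload tarball URL from the log; a 404 means no preload exists for this k8s/runtime pair
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 | head -n 1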
	
	
	==> CRI-O <==
	Dec 05 07:06:22 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:22.643846313Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:06:22 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:22.985522972Z" level=info msg="Removing container: fbc94530dec6225c4a111ce6fcbf867064fa3662b41aba8b7a154faf2e6adbb4" id=345829f2-22ae-4b36-8e2c-d0161a5076ac name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:22 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:22.995627716Z" level=info msg="Removed container fbc94530dec6225c4a111ce6fcbf867064fa3662b41aba8b7a154faf2e6adbb4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j/dashboard-metrics-scraper" id=345829f2-22ae-4b36-8e2c-d0161a5076ac name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.927550206Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=33bb909d-ad9f-45a0-a15a-a1b31f48c36b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.928415813Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c10b6111-3906-4d46-b1a0-d4c31e7b0c08 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.92934984Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j/dashboard-metrics-scraper" id=b50ae9e6-dd2d-44b4-a40c-859696b4e300 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.929468962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.936300077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.936982535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.974486779Z" level=info msg="Created container 5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j/dashboard-metrics-scraper" id=b50ae9e6-dd2d-44b4-a40c-859696b4e300 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.975052966Z" level=info msg="Starting container: 5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842" id=5b25b25e-5b8c-439e-babd-3214e2d9f6cc name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.976965218Z" level=info msg="Started container" PID=1769 containerID=5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j/dashboard-metrics-scraper id=5b25b25e-5b8c-439e-babd-3214e2d9f6cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=7277646d740559e591f2f9afb3df4c057078e47c3f692ad83b4ebf699073e6d9
	Dec 05 07:06:41 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:41.037281639Z" level=info msg="Removing container: 4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c" id=47f3d5c6-5c11-4d55-a86c-40cffedd87fe name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:41 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:41.047838971Z" level=info msg="Removed container 4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j/dashboard-metrics-scraper" id=47f3d5c6-5c11-4d55-a86c-40cffedd87fe name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.045799421Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6f9aff2-c3cb-47b9-81cf-a003b7103da1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.046908107Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cc20722a-2b10-43d5-ae64-1723a62c7652 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.048007276Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3f5017a3-1aea-4b35-a7ab-f455e5a9c13e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.048137171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.052979963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.053169731Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e47799f33d24a1f67784834f5f7cffe87343e233b9cf3bb1fadccfc5dae213fd/merged/etc/passwd: no such file or directory"
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.053199038Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e47799f33d24a1f67784834f5f7cffe87343e233b9cf3bb1fadccfc5dae213fd/merged/etc/group: no such file or directory"
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.053475347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.083399094Z" level=info msg="Created container 0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4: kube-system/storage-provisioner/storage-provisioner" id=3f5017a3-1aea-4b35-a7ab-f455e5a9c13e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.084023095Z" level=info msg="Starting container: 0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4" id=c6dd6cc5-be50-40aa-bf54-e1cc409a8f25 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.086389869Z" level=info msg="Started container" PID=1783 containerID=0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4 description=kube-system/storage-provisioner/storage-provisioner id=c6dd6cc5-be50-40aa-bf54-e1cc409a8f25 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c373a7598bfe88c0ea0ac97a0d235e6d75b0a7080d30d5e916cd69faf92becc
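The remove/create/start cycle for dashboard-metrics-scraper in this section is the runtime-side view of the CrashLoopBackOff the kubelet reports further down. A minimal way to see the accumulated attempts from inside the node (a sketch, assuming crictl is available in the minikube container as it is on the kicbase image):

	# list every attempt of the scraper container as CRI-O sees it, including exited ones
	minikube -p default-k8s-diff-port-172186 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper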
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	0db2b232951f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   8c373a7598bfe       storage-provisioner                                    kube-system
	5a549ea68f1d9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   7277646d74055       dashboard-metrics-scraper-6ffb444bf9-q4f9j             kubernetes-dashboard
	63ca6b04e977b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   5c6e3bb140e4f       kubernetes-dashboard-855c9754f9-2clpl                  kubernetes-dashboard
	a0e35c9119209       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   fa2c43f7954a9       busybox                                                default
	8ca6589a660b2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   760c97ab10851       coredns-66bc5c9577-lzlm8                               kube-system
	2b4ad487f94d0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   621f82a1bca8a       kindnet-w2mzg                                          kube-system
	3d055b1cda12d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   8c373a7598bfe       storage-provisioner                                    kube-system
	345ebcc959b75       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           53 seconds ago      Running             kube-proxy                  0                   ab236b05b6f1a       kube-proxy-fpss6                                       kube-system
	ed8de5e69d481       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           56 seconds ago      Running             kube-apiserver              0                   93ad286200d7f       kube-apiserver-default-k8s-diff-port-172186            kube-system
	b8424f7771088       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   b5de8684cef9f       etcd-default-k8s-diff-port-172186                      kube-system
	b75fc581167e9       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           56 seconds ago      Running             kube-scheduler              0                   571ee78b4c136       kube-scheduler-default-k8s-diff-port-172186            kube-system
	d42f7b44a3dec       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           56 seconds ago      Running             kube-controller-manager     0                   4ebedf91780ac       kube-controller-manager-default-k8s-diff-port-172186   kube-system
	
	
	==> coredns [8ca6589a660b2e7ecdcaa10b0a47179aae45ca9174311253ee76dccda4795574] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45369 - 10480 "HINFO IN 7472493519933402814.3980104550805037176. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.880662929s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
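The dial timeouts to 10.96.0.1:443 above most likely reflect CoreDNS starting before kube-proxy and kindnet have re-programmed the service VIP after the restart; once their caches sync (visible in the sections below) the errors stop. A hedged way to verify the same path after the fact:

	# CoreDNS should report Ready, and the kubernetes Service should have live endpoints behind 10.96.0.1
	kubectl --context default-k8s-diff-port-172186 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context default-k8s-diff-port-172186 get endpoints kubernetes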
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-172186
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-172186
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=default-k8s-diff-port-172186
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_05_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:05:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-172186
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:06:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:06:42 +0000   Fri, 05 Dec 2025 07:05:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:06:42 +0000   Fri, 05 Dec 2025 07:05:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:06:42 +0000   Fri, 05 Dec 2025 07:05:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:06:42 +0000   Fri, 05 Dec 2025 07:05:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-172186
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                0c6d18bf-2e40-435b-9be8-d014e737e08c
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-lzlm8                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-default-k8s-diff-port-172186                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-w2mzg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-172186             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-172186    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-fpss6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-172186             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q4f9j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2clpl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node default-k8s-diff-port-172186 event: Registered Node default-k8s-diff-port-172186 in Controller
	  Normal  NodeReady                93s                kubelet          Node default-k8s-diff-port-172186 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node default-k8s-diff-port-172186 event: Registered Node default-k8s-diff-port-172186 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [b8424f777108894c3d90c6444a4cb21c9dab385dcfca8b378b0637e27eb4bd6f] <==
	{"level":"warn","ts":"2025-12-05T07:06:10.923170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.931749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.938851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.945946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.952262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.958454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.964814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.973627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.983248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.993413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.999302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.005404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.011410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.017511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.023924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.030232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.036294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.042495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.049171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.055595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.061950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.080528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.087114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.093926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.144912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44406","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:07:05 up  1:49,  0 user,  load average: 3.97, 3.40, 2.30
	Linux default-k8s-diff-port-172186 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2b4ad487f94d05e6763801ea37294d3cda06090f5ad53f839147ad1672d2cf8d] <==
	I1205 07:06:12.424672       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:06:12.424939       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1205 07:06:12.425080       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:06:12.425094       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:06:12.425114       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:06:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:06:12.625580       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:06:12.625615       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:06:12.625627       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:06:12.716886       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:06:12.997369       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:06:12.997405       1 metrics.go:72] Registering metrics
	I1205 07:06:12.997537       1 controller.go:711] "Syncing nftables rules"
	I1205 07:06:22.625956       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:06:22.626062       1 main.go:301] handling current node
	I1205 07:06:32.627426       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:06:32.627472       1 main.go:301] handling current node
	I1205 07:06:42.626046       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:06:42.626078       1 main.go:301] handling current node
	I1205 07:06:52.628410       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:06:52.628446       1 main.go:301] handling current node
	I1205 07:07:02.634400       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:07:02.634430       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ed8de5e69d48178f99d8fc4509335772d9301f83872fdafa6ee82b6e6883c141] <==
	I1205 07:06:11.586525       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1205 07:06:11.594015       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1205 07:06:11.594083       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1205 07:06:11.597643       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 07:06:11.597726       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1205 07:06:11.597775       1 aggregator.go:171] initial CRD sync complete...
	I1205 07:06:11.597783       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:06:11.597789       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:06:11.597794       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:06:11.607659       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1205 07:06:11.607683       1 policy_source.go:240] refreshing policies
	I1205 07:06:11.627309       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:06:11.805453       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 07:06:11.829456       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:06:11.846536       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:06:11.854243       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:06:11.859929       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:06:11.888664       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.97.4"}
	I1205 07:06:11.897651       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.213.64"}
	I1205 07:06:12.489756       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:06:15.024171       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:06:15.024219       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:06:15.173065       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:06:15.173065       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:06:15.326073       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d42f7b44a3dec7cdfb77e71f8c1b0ea379df337d93c48967c985cfb5efc79957] <==
	I1205 07:06:14.884110       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:06:14.890354       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 07:06:14.920254       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 07:06:14.920276       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1205 07:06:14.920284       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1205 07:06:14.920358       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 07:06:14.920368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:06:14.920382       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 07:06:14.920382       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1205 07:06:14.920391       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 07:06:14.920426       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1205 07:06:14.920431       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1205 07:06:14.922426       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1205 07:06:14.924158       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1205 07:06:14.924221       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 07:06:14.924305       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-172186"
	I1205 07:06:14.924406       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 07:06:14.925162       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:06:14.926343       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1205 07:06:14.928508       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1205 07:06:14.930642       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1205 07:06:14.931867       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 07:06:14.934201       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1205 07:06:14.938456       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1205 07:06:14.946751       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [345ebcc959b75bb217149937541b68c27a98c41bdc6e9cf28541b7f32e891d5f] <==
	I1205 07:06:12.307130       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:06:12.374916       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 07:06:12.475174       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 07:06:12.475221       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1205 07:06:12.475287       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:06:12.495383       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:06:12.495447       1 server_linux.go:132] "Using iptables Proxier"
	I1205 07:06:12.500813       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:06:12.501262       1 server.go:527] "Version info" version="v1.34.2"
	I1205 07:06:12.501301       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:06:12.504157       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:06:12.504237       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:06:12.504292       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:06:12.504308       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:06:12.504369       1 config.go:200] "Starting service config controller"
	I1205 07:06:12.504376       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:06:12.504443       1 config.go:309] "Starting node config controller"
	I1205 07:06:12.504451       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:06:12.504458       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:06:12.604872       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:06:12.604928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 07:06:12.604935       1 shared_informer.go:356] "Caches are synced" controller="service config"
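The only non-info line in this section is the advisory about nodePortAddresses: left unset, NodePort services accept connections on every local IP. If that ever needs tightening, the field lives in the kubeadm-managed kube-proxy ConfigMap (a sketch; the change only takes effect when kube-proxy restarts):

	# show the nodePortAddresses setting inside the kube-proxy configuration
	kubectl --context default-k8s-diff-port-172186 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses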
	
	
	==> kube-scheduler [b75fc581167e9dc3ab0503563eaf8c4d2824d2a1cb80aeb0d90ec0ccbe49c84e] <==
	I1205 07:06:10.061022       1 serving.go:386] Generated self-signed cert in-memory
	W1205 07:06:11.501624       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 07:06:11.501654       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 07:06:11.501666       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 07:06:11.501676       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 07:06:11.543082       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1205 07:06:11.543177       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:06:11.553912       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 07:06:11.553999       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:06:11.555345       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:06:11.554020       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 07:06:11.656060       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 07:06:15 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:15.640002     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxrb\" (UniqueName: \"kubernetes.io/projected/9c2c5d24-0653-4c1d-a7c2-b211e66230ec-kube-api-access-bcxrb\") pod \"dashboard-metrics-scraper-6ffb444bf9-q4f9j\" (UID: \"9c2c5d24-0653-4c1d-a7c2-b211e66230ec\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j"
	Dec 05 07:06:15 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:15.640036     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9c2c5d24-0653-4c1d-a7c2-b211e66230ec-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-q4f9j\" (UID: \"9c2c5d24-0653-4c1d-a7c2-b211e66230ec\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j"
	Dec 05 07:06:18 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:18.982235     733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 05 07:06:19 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:19.988494     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2clpl" podStartSLOduration=1.795296081 podStartE2EDuration="4.988471079s" podCreationTimestamp="2025-12-05 07:06:15 +0000 UTC" firstStartedPulling="2025-12-05 07:06:15.871035734 +0000 UTC m=+7.031415627" lastFinishedPulling="2025-12-05 07:06:19.064210702 +0000 UTC m=+10.224590625" observedRunningTime="2025-12-05 07:06:19.98792839 +0000 UTC m=+11.148308292" watchObservedRunningTime="2025-12-05 07:06:19.988471079 +0000 UTC m=+11.148850981"
	Dec 05 07:06:21 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:21.979104     733 scope.go:117] "RemoveContainer" containerID="fbc94530dec6225c4a111ce6fcbf867064fa3662b41aba8b7a154faf2e6adbb4"
	Dec 05 07:06:22 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:22.984075     733 scope.go:117] "RemoveContainer" containerID="fbc94530dec6225c4a111ce6fcbf867064fa3662b41aba8b7a154faf2e6adbb4"
	Dec 05 07:06:22 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:22.984187     733 scope.go:117] "RemoveContainer" containerID="4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c"
	Dec 05 07:06:22 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:22.984399     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:06:23 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:23.987742     733 scope.go:117] "RemoveContainer" containerID="4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c"
	Dec 05 07:06:23 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:23.987912     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:06:27 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:27.716594     733 scope.go:117] "RemoveContainer" containerID="4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c"
	Dec 05 07:06:27 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:27.716762     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:06:40 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:40.926997     733 scope.go:117] "RemoveContainer" containerID="4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c"
	Dec 05 07:06:41 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:41.036116     733 scope.go:117] "RemoveContainer" containerID="4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c"
	Dec 05 07:06:41 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:41.036333     733 scope.go:117] "RemoveContainer" containerID="5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842"
	Dec 05 07:06:41 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:41.036542     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:06:43 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:43.044816     733 scope.go:117] "RemoveContainer" containerID="3d055b1cda12db6333c4b7b7e4344c3b23a3f4ec76f76fce308840302458b641"
	Dec 05 07:06:47 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:47.717154     733 scope.go:117] "RemoveContainer" containerID="5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842"
	Dec 05 07:06:47 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:47.717383     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:06:59 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:59.927274     733 scope.go:117] "RemoveContainer" containerID="5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842"
	Dec 05 07:06:59 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:59.927523     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:07:02 default-k8s-diff-port-172186 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:07:02 default-k8s-diff-port-172186 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:07:02 default-k8s-diff-port-172186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:07:02 default-k8s-diff-port-172186 systemd[1]: kubelet.service: Consumed 1.629s CPU time.
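The only errors the kubelet logs are the dashboard-metrics-scraper back-offs (10s, then 20s) before systemd stops kubelet.service, consistent with the Pause step of this test group. The usual first look at why that container keeps exiting is the previous attempt's logs, using the pod name from the messages above:

	# logs from the last failed run of the scraper container
	kubectl --context default-k8s-diff-port-172186 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-q4f9j --previous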
	
	
	==> kubernetes-dashboard [63ca6b04e977b79b30859dcf6992da6b3a0f31873efd6b199ad9754419183484] <==
	2025/12/05 07:06:19 Starting overwatch
	2025/12/05 07:06:19 Using namespace: kubernetes-dashboard
	2025/12/05 07:06:19 Using in-cluster config to connect to apiserver
	2025/12/05 07:06:19 Using secret token for csrf signing
	2025/12/05 07:06:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/05 07:06:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/05 07:06:19 Successful initial request to the apiserver, version: v1.34.2
	2025/12/05 07:06:19 Generating JWE encryption key
	2025/12/05 07:06:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/05 07:06:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/05 07:06:19 Initializing JWE encryption key from synchronized object
	2025/12/05 07:06:19 Creating in-cluster Sidecar client
	2025/12/05 07:06:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:06:19 Serving insecurely on HTTP port: 9090
	2025/12/05 07:06:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4] <==
	I1205 07:06:43.101120       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:06:43.108982       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:06:43.109023       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1205 07:06:43.110991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:46.566581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:50.826682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:54.425518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:57.479849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:00.502495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:00.506824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:07:00.506996       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:07:00.507160       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-172186_2194eba6-4b3c-4e19-bbad-a506568ce171!
	I1205 07:07:00.507142       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73acd703-8958-4c9b-a71e-6ab66433bd8b", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-172186_2194eba6-4b3c-4e19-bbad-a506568ce171 became leader
	W1205 07:07:00.509227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:00.512521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:07:00.607438       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-172186_2194eba6-4b3c-4e19-bbad-a506568ce171!
	W1205 07:07:02.515872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:02.519818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:04.523219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:04.528424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [3d055b1cda12db6333c4b7b7e4344c3b23a3f4ec76f76fce308840302458b641] <==
	I1205 07:06:12.280671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 07:06:42.282932       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186: exit status 2 (327.370927ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-172186 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-172186
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-172186:

-- stdout --
	[
	    {
	        "Id": "b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c",
	        "Created": "2025-12-05T07:04:58.706172169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369344,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:06:01.394443264Z",
	            "FinishedAt": "2025-12-05T07:06:00.526089316Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/hostname",
	        "HostsPath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/hosts",
	        "LogPath": "/var/lib/docker/containers/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c/b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c-json.log",
	        "Name": "/default-k8s-diff-port-172186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-172186:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-172186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b4ba7170def8ab534781e3dd304a8637718c12338739d4e1050d3b5880890e2c",
	                "LowerDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c96eaf9eb419ebef99811f6322c1b275b245ec6aed2f5aab10dfa2ad8ce92069/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-172186",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-172186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-172186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-172186",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-172186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d930d575028e8e3b0c5b4d828a070dba7ba3c3f3d5127cdc220c8e4afc32b3a4",
	            "SandboxKey": "/var/run/docker/netns/d930d575028e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-172186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7252f408ef750a913b6fabe10d1ab3c2a2b877d7652581ebca03873c25ab3784",
	                    "EndpointID": "b26e4c9526c5476b08f9535e30117e51b87b69bd4ef2348d834c904fea7f5514",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "92:25:b6:17:88:ba",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-172186",
	                        "b4ba7170def8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186: exit status 2 (323.590651ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-172186 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-172186 logs -n 25: (1.079332892s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-172186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-172186 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p no-preload-008839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:05 UTC │
	│ start   │ -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-770390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ image   │ no-preload-008839 image list --format=json                                                                                                                                                                                                           │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p no-preload-008839 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-624263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p newest-cni-624263 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-624263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │                     │
	│ image   │ default-k8s-diff-port-172186 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ pause   │ -p default-k8s-diff-port-172186 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:07:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:07:01.213912  384982 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:07:01.214313  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214349  384982 out.go:374] Setting ErrFile to fd 2...
	I1205 07:07:01.214355  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214781  384982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:07:01.215653  384982 out.go:368] Setting JSON to false
	I1205 07:07:01.216724  384982 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6565,"bootTime":1764911856,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:07:01.216808  384982 start.go:143] virtualization: kvm guest
	I1205 07:07:01.218407  384982 out.go:179] * [newest-cni-624263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:07:01.219810  384982 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:07:01.219833  384982 notify.go:221] Checking for updates...
	I1205 07:07:01.222062  384982 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:07:01.223099  384982 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:01.224159  384982 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:07:01.228780  384982 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:07:01.229941  384982 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:07:01.231538  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:01.232012  384982 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:07:01.255273  384982 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:07:01.255390  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.307181  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.297693108 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.307271  384982 docker.go:319] overlay module found
	I1205 07:07:01.308817  384982 out.go:179] * Using the docker driver based on existing profile
	I1205 07:07:01.309938  384982 start.go:309] selected driver: docker
	I1205 07:07:01.309951  384982 start.go:927] validating driver "docker" against &{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.310051  384982 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:07:01.310627  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.362953  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.353513591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.363234  384982 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:07:01.363265  384982 cni.go:84] Creating CNI manager for ""
	I1205 07:07:01.363312  384982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:07:01.363388  384982 start.go:353] cluster config:
	{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.364930  384982 out.go:179] * Starting "newest-cni-624263" primary control-plane node in "newest-cni-624263" cluster
	I1205 07:07:01.365960  384982 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:07:01.367044  384982 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	W1205 07:06:57.706664  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:59.707033  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:01.368093  384982 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:07:01.368198  384982 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:07:01.387169  384982 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:07:01.387192  384982 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:07:01.393466  384982 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1205 07:07:01.635612  384982 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 07:07:01.635800  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.635881  384982 cache.go:107] acquiring lock: {Name:mk98363952ca1815516604fb7dbfef9be11a7d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635913  384982 cache.go:107] acquiring lock: {Name:mkf79bca1dcd2e8402871ccbd85f08189f26d5a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635887  384982 cache.go:107] acquiring lock: {Name:mk7e52439bbd1c3c582b2dbb20db8467b0caa4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635883  384982 cache.go:107] acquiring lock: {Name:mk205a6d5dedd135c0c99429d72b9328d8d5dc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635961  384982 cache.go:107] acquiring lock: {Name:mk167c9428ef1965e0e29561c9593491905126f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636001  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:07:01.635990  384982 cache.go:107] acquiring lock: {Name:mk64ac073eb60c52be1998c1349c3f317eb7eb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:07:01.636013  384982 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 137.69µs
	I1205 07:07:01.636037  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:07:01.636039  384982 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:07:01.636031  384982 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 171.708µs
	I1205 07:07:01.636003  384982 cache.go:107] acquiring lock: {Name:mk55ddd5ec022e6049bc6d750efbad0639669233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636029  384982 cache.go:107] acquiring lock: {Name:mk4eccc9886628e868c0adec616b704f1c193356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636046  384982 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 87.511µs
	I1205 07:07:01.636052  384982 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636064  384982 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636066  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:07:01.636074  384982 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 88.508µs
	I1205 07:07:01.636082  384982 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:07:01.636019  384982 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 125.111µs
	I1205 07:07:01.636098  384982 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:07:01.636112  384982 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:07:01.636042  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:07:01.636150  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:07:01.636147  384982 start.go:360] acquireMachinesLock for newest-cni-624263: {Name:mka35bbd7b5824f70f8017fd9b3a0ee56ab72931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636147  384982 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 265.61µs
	I1205 07:07:01.636162  384982 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636158  384982 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 197.698µs
	I1205 07:07:01.636178  384982 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:07:01.636191  384982 start.go:364] duration metric: took 30.266µs to acquireMachinesLock for "newest-cni-624263"
	I1205 07:07:01.636187  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:07:01.636206  384982 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:07:01.636205  384982 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 226.523µs
	I1205 07:07:01.636213  384982 fix.go:54] fixHost starting: 
	I1205 07:07:01.636216  384982 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636234  384982 cache.go:87] Successfully saved all images to host disk.
	I1205 07:07:01.636479  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.654206  384982 fix.go:112] recreateIfNeeded on newest-cni-624263: state=Stopped err=<nil>
	W1205 07:07:01.654241  384982 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:07:01.656485  384982 out.go:252] * Restarting existing docker container for "newest-cni-624263" ...
	I1205 07:07:01.656540  384982 cli_runner.go:164] Run: docker start newest-cni-624263
	I1205 07:07:01.895199  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.914785  384982 kic.go:430] container "newest-cni-624263" state is running.
	I1205 07:07:01.915225  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:01.934239  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.934479  384982 machine.go:94] provisionDockerMachine start ...
	I1205 07:07:01.934568  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:01.952380  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:01.952665  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:01.952679  384982 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:07:01.953292  384982 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55518->127.0.0.1:33138: read: connection reset by peer
	I1205 07:07:05.092419  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:07:05.092445  384982 ubuntu.go:182] provisioning hostname "newest-cni-624263"
	I1205 07:07:05.092491  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.112429  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.112718  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.112739  384982 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-624263 && echo "newest-cni-624263" | sudo tee /etc/hostname
	I1205 07:07:05.265486  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:07:05.265582  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.285453  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.285689  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.285716  384982 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-624263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-624263/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-624263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:07:05.425411  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:07:05.425436  384982 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:07:05.425464  384982 ubuntu.go:190] setting up certificates
	I1205 07:07:05.425475  384982 provision.go:84] configureAuth start
	I1205 07:07:05.425532  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:05.443549  384982 provision.go:143] copyHostCerts
	I1205 07:07:05.443614  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:07:05.443629  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:07:05.443700  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:07:05.443800  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:07:05.443816  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:07:05.443845  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:07:05.443904  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:07:05.443915  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:07:05.443950  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:07:05.444023  384982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.newest-cni-624263 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-624263]
	I1205 07:07:05.672635  384982 provision.go:177] copyRemoteCerts
	I1205 07:07:05.672684  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:07:05.672730  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.690043  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:05.792000  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:07:05.810085  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:07:05.827489  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:07:05.844988  384982 provision.go:87] duration metric: took 419.49922ms to configureAuth
	I1205 07:07:05.845013  384982 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:07:05.845213  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:05.845355  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.868784  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.868985  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.869010  384982 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:07:06.168481  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:07:06.168508  384982 machine.go:97] duration metric: took 4.234011493s to provisionDockerMachine
	I1205 07:07:06.168521  384982 start.go:293] postStartSetup for "newest-cni-624263" (driver="docker")
	I1205 07:07:06.168536  384982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:07:06.168593  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:07:06.168662  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.188502  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	W1205 07:07:02.207380  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:07:04.704952  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 05 07:06:22 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:22.643846313Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:06:22 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:22.985522972Z" level=info msg="Removing container: fbc94530dec6225c4a111ce6fcbf867064fa3662b41aba8b7a154faf2e6adbb4" id=345829f2-22ae-4b36-8e2c-d0161a5076ac name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:22 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:22.995627716Z" level=info msg="Removed container fbc94530dec6225c4a111ce6fcbf867064fa3662b41aba8b7a154faf2e6adbb4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j/dashboard-metrics-scraper" id=345829f2-22ae-4b36-8e2c-d0161a5076ac name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.927550206Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=33bb909d-ad9f-45a0-a15a-a1b31f48c36b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.928415813Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c10b6111-3906-4d46-b1a0-d4c31e7b0c08 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.92934984Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j/dashboard-metrics-scraper" id=b50ae9e6-dd2d-44b4-a40c-859696b4e300 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.929468962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.936300077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.936982535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.974486779Z" level=info msg="Created container 5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j/dashboard-metrics-scraper" id=b50ae9e6-dd2d-44b4-a40c-859696b4e300 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.975052966Z" level=info msg="Starting container: 5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842" id=5b25b25e-5b8c-439e-babd-3214e2d9f6cc name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:40 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:40.976965218Z" level=info msg="Started container" PID=1769 containerID=5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j/dashboard-metrics-scraper id=5b25b25e-5b8c-439e-babd-3214e2d9f6cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=7277646d740559e591f2f9afb3df4c057078e47c3f692ad83b4ebf699073e6d9
	Dec 05 07:06:41 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:41.037281639Z" level=info msg="Removing container: 4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c" id=47f3d5c6-5c11-4d55-a86c-40cffedd87fe name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:41 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:41.047838971Z" level=info msg="Removed container 4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j/dashboard-metrics-scraper" id=47f3d5c6-5c11-4d55-a86c-40cffedd87fe name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.045799421Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6f9aff2-c3cb-47b9-81cf-a003b7103da1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.046908107Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cc20722a-2b10-43d5-ae64-1723a62c7652 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.048007276Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3f5017a3-1aea-4b35-a7ab-f455e5a9c13e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.048137171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.052979963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.053169731Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e47799f33d24a1f67784834f5f7cffe87343e233b9cf3bb1fadccfc5dae213fd/merged/etc/passwd: no such file or directory"
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.053199038Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e47799f33d24a1f67784834f5f7cffe87343e233b9cf3bb1fadccfc5dae213fd/merged/etc/group: no such file or directory"
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.053475347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.083399094Z" level=info msg="Created container 0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4: kube-system/storage-provisioner/storage-provisioner" id=3f5017a3-1aea-4b35-a7ab-f455e5a9c13e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.084023095Z" level=info msg="Starting container: 0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4" id=c6dd6cc5-be50-40aa-bf54-e1cc409a8f25 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:06:43 default-k8s-diff-port-172186 crio[569]: time="2025-12-05T07:06:43.086389869Z" level=info msg="Started container" PID=1783 containerID=0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4 description=kube-system/storage-provisioner/storage-provisioner id=c6dd6cc5-be50-40aa-bf54-e1cc409a8f25 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c373a7598bfe88c0ea0ac97a0d235e6d75b0a7080d30d5e916cd69faf92becc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	0db2b232951f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   8c373a7598bfe       storage-provisioner                                    kube-system
	5a549ea68f1d9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   7277646d74055       dashboard-metrics-scraper-6ffb444bf9-q4f9j             kubernetes-dashboard
	63ca6b04e977b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   5c6e3bb140e4f       kubernetes-dashboard-855c9754f9-2clpl                  kubernetes-dashboard
	a0e35c9119209       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   fa2c43f7954a9       busybox                                                default
	8ca6589a660b2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   760c97ab10851       coredns-66bc5c9577-lzlm8                               kube-system
	2b4ad487f94d0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   621f82a1bca8a       kindnet-w2mzg                                          kube-system
	3d055b1cda12d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   8c373a7598bfe       storage-provisioner                                    kube-system
	345ebcc959b75       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           54 seconds ago      Running             kube-proxy                  0                   ab236b05b6f1a       kube-proxy-fpss6                                       kube-system
	ed8de5e69d481       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           57 seconds ago      Running             kube-apiserver              0                   93ad286200d7f       kube-apiserver-default-k8s-diff-port-172186            kube-system
	b8424f7771088       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   b5de8684cef9f       etcd-default-k8s-diff-port-172186                      kube-system
	b75fc581167e9       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           57 seconds ago      Running             kube-scheduler              0                   571ee78b4c136       kube-scheduler-default-k8s-diff-port-172186            kube-system
	d42f7b44a3dec       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           57 seconds ago      Running             kube-controller-manager     0                   4ebedf91780ac       kube-controller-manager-default-k8s-diff-port-172186   kube-system
	
	
	==> coredns [8ca6589a660b2e7ecdcaa10b0a47179aae45ca9174311253ee76dccda4795574] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45369 - 10480 "HINFO IN 7472493519933402814.3980104550805037176. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.880662929s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-172186
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-172186
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=default-k8s-diff-port-172186
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_05_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:05:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-172186
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:06:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:06:42 +0000   Fri, 05 Dec 2025 07:05:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:06:42 +0000   Fri, 05 Dec 2025 07:05:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:06:42 +0000   Fri, 05 Dec 2025 07:05:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:06:42 +0000   Fri, 05 Dec 2025 07:05:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-172186
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                0c6d18bf-2e40-435b-9be8-d014e737e08c
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-lzlm8                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-172186                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-w2mzg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-172186             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-172186    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-fpss6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-172186             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q4f9j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2clpl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node default-k8s-diff-port-172186 event: Registered Node default-k8s-diff-port-172186 in Controller
	  Normal  NodeReady                95s                kubelet          Node default-k8s-diff-port-172186 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 59s)  kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 59s)  kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 59s)  kubelet          Node default-k8s-diff-port-172186 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node default-k8s-diff-port-172186 event: Registered Node default-k8s-diff-port-172186 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [b8424f777108894c3d90c6444a4cb21c9dab385dcfca8b378b0637e27eb4bd6f] <==
	{"level":"warn","ts":"2025-12-05T07:06:10.923170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.931749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.938851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.945946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.952262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.958454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.964814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.973627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.983248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.993413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:10.999302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.005404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.011410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.017511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.023924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.030232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.036294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.042495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.049171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.055595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.061950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.080528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.087114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.093926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:11.144912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44406","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:07:07 up  1:49,  0 user,  load average: 3.97, 3.40, 2.30
	Linux default-k8s-diff-port-172186 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2b4ad487f94d05e6763801ea37294d3cda06090f5ad53f839147ad1672d2cf8d] <==
	I1205 07:06:12.424672       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:06:12.424939       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1205 07:06:12.425080       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:06:12.425094       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:06:12.425114       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:06:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:06:12.625580       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:06:12.625615       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:06:12.625627       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:06:12.716886       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:06:12.997369       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:06:12.997405       1 metrics.go:72] Registering metrics
	I1205 07:06:12.997537       1 controller.go:711] "Syncing nftables rules"
	I1205 07:06:22.625956       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:06:22.626062       1 main.go:301] handling current node
	I1205 07:06:32.627426       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:06:32.627472       1 main.go:301] handling current node
	I1205 07:06:42.626046       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:06:42.626078       1 main.go:301] handling current node
	I1205 07:06:52.628410       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:06:52.628446       1 main.go:301] handling current node
	I1205 07:07:02.634400       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1205 07:07:02.634430       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ed8de5e69d48178f99d8fc4509335772d9301f83872fdafa6ee82b6e6883c141] <==
	I1205 07:06:11.586525       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1205 07:06:11.594015       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1205 07:06:11.594083       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1205 07:06:11.597643       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 07:06:11.597726       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1205 07:06:11.597775       1 aggregator.go:171] initial CRD sync complete...
	I1205 07:06:11.597783       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:06:11.597789       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:06:11.597794       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:06:11.607659       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1205 07:06:11.607683       1 policy_source.go:240] refreshing policies
	I1205 07:06:11.627309       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:06:11.805453       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 07:06:11.829456       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:06:11.846536       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:06:11.854243       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:06:11.859929       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:06:11.888664       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.97.4"}
	I1205 07:06:11.897651       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.213.64"}
	I1205 07:06:12.489756       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:06:15.024171       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:06:15.024219       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:06:15.173065       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:06:15.173065       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:06:15.326073       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d42f7b44a3dec7cdfb77e71f8c1b0ea379df337d93c48967c985cfb5efc79957] <==
	I1205 07:06:14.884110       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:06:14.890354       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 07:06:14.920254       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 07:06:14.920276       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1205 07:06:14.920284       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1205 07:06:14.920358       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 07:06:14.920368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:06:14.920382       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 07:06:14.920382       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1205 07:06:14.920391       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 07:06:14.920426       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1205 07:06:14.920431       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1205 07:06:14.922426       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1205 07:06:14.924158       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1205 07:06:14.924221       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 07:06:14.924305       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-172186"
	I1205 07:06:14.924406       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 07:06:14.925162       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:06:14.926343       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1205 07:06:14.928508       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1205 07:06:14.930642       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1205 07:06:14.931867       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 07:06:14.934201       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1205 07:06:14.938456       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1205 07:06:14.946751       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [345ebcc959b75bb217149937541b68c27a98c41bdc6e9cf28541b7f32e891d5f] <==
	I1205 07:06:12.307130       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:06:12.374916       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 07:06:12.475174       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 07:06:12.475221       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1205 07:06:12.475287       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:06:12.495383       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:06:12.495447       1 server_linux.go:132] "Using iptables Proxier"
	I1205 07:06:12.500813       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:06:12.501262       1 server.go:527] "Version info" version="v1.34.2"
	I1205 07:06:12.501301       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:06:12.504157       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:06:12.504237       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:06:12.504292       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:06:12.504308       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:06:12.504369       1 config.go:200] "Starting service config controller"
	I1205 07:06:12.504376       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:06:12.504443       1 config.go:309] "Starting node config controller"
	I1205 07:06:12.504451       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:06:12.504458       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:06:12.604872       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:06:12.604928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 07:06:12.604935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b75fc581167e9dc3ab0503563eaf8c4d2824d2a1cb80aeb0d90ec0ccbe49c84e] <==
	I1205 07:06:10.061022       1 serving.go:386] Generated self-signed cert in-memory
	W1205 07:06:11.501624       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 07:06:11.501654       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 07:06:11.501666       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 07:06:11.501676       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 07:06:11.543082       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1205 07:06:11.543177       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:06:11.553912       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 07:06:11.553999       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:06:11.555345       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:06:11.554020       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 07:06:11.656060       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 07:06:15 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:15.640002     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxrb\" (UniqueName: \"kubernetes.io/projected/9c2c5d24-0653-4c1d-a7c2-b211e66230ec-kube-api-access-bcxrb\") pod \"dashboard-metrics-scraper-6ffb444bf9-q4f9j\" (UID: \"9c2c5d24-0653-4c1d-a7c2-b211e66230ec\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j"
	Dec 05 07:06:15 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:15.640036     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9c2c5d24-0653-4c1d-a7c2-b211e66230ec-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-q4f9j\" (UID: \"9c2c5d24-0653-4c1d-a7c2-b211e66230ec\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j"
	Dec 05 07:06:18 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:18.982235     733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 05 07:06:19 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:19.988494     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2clpl" podStartSLOduration=1.795296081 podStartE2EDuration="4.988471079s" podCreationTimestamp="2025-12-05 07:06:15 +0000 UTC" firstStartedPulling="2025-12-05 07:06:15.871035734 +0000 UTC m=+7.031415627" lastFinishedPulling="2025-12-05 07:06:19.064210702 +0000 UTC m=+10.224590625" observedRunningTime="2025-12-05 07:06:19.98792839 +0000 UTC m=+11.148308292" watchObservedRunningTime="2025-12-05 07:06:19.988471079 +0000 UTC m=+11.148850981"
	Dec 05 07:06:21 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:21.979104     733 scope.go:117] "RemoveContainer" containerID="fbc94530dec6225c4a111ce6fcbf867064fa3662b41aba8b7a154faf2e6adbb4"
	Dec 05 07:06:22 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:22.984075     733 scope.go:117] "RemoveContainer" containerID="fbc94530dec6225c4a111ce6fcbf867064fa3662b41aba8b7a154faf2e6adbb4"
	Dec 05 07:06:22 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:22.984187     733 scope.go:117] "RemoveContainer" containerID="4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c"
	Dec 05 07:06:22 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:22.984399     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:06:23 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:23.987742     733 scope.go:117] "RemoveContainer" containerID="4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c"
	Dec 05 07:06:23 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:23.987912     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:06:27 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:27.716594     733 scope.go:117] "RemoveContainer" containerID="4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c"
	Dec 05 07:06:27 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:27.716762     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:06:40 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:40.926997     733 scope.go:117] "RemoveContainer" containerID="4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c"
	Dec 05 07:06:41 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:41.036116     733 scope.go:117] "RemoveContainer" containerID="4dda2e3d5abf03c78b3fc8ff9a4c42b8d7c64117fddf414b712ecd44876c6e9c"
	Dec 05 07:06:41 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:41.036333     733 scope.go:117] "RemoveContainer" containerID="5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842"
	Dec 05 07:06:41 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:41.036542     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:06:43 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:43.044816     733 scope.go:117] "RemoveContainer" containerID="3d055b1cda12db6333c4b7b7e4344c3b23a3f4ec76f76fce308840302458b641"
	Dec 05 07:06:47 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:47.717154     733 scope.go:117] "RemoveContainer" containerID="5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842"
	Dec 05 07:06:47 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:47.717383     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:06:59 default-k8s-diff-port-172186 kubelet[733]: I1205 07:06:59.927274     733 scope.go:117] "RemoveContainer" containerID="5a549ea68f1d943bee76c1d6675a725180c81963d06f5b65ff8771feee5fe842"
	Dec 05 07:06:59 default-k8s-diff-port-172186 kubelet[733]: E1205 07:06:59.927523     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q4f9j_kubernetes-dashboard(9c2c5d24-0653-4c1d-a7c2-b211e66230ec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q4f9j" podUID="9c2c5d24-0653-4c1d-a7c2-b211e66230ec"
	Dec 05 07:07:02 default-k8s-diff-port-172186 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:07:02 default-k8s-diff-port-172186 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:07:02 default-k8s-diff-port-172186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:07:02 default-k8s-diff-port-172186 systemd[1]: kubelet.service: Consumed 1.629s CPU time.
	
	
	==> kubernetes-dashboard [63ca6b04e977b79b30859dcf6992da6b3a0f31873efd6b199ad9754419183484] <==
	2025/12/05 07:06:19 Starting overwatch
	2025/12/05 07:06:19 Using namespace: kubernetes-dashboard
	2025/12/05 07:06:19 Using in-cluster config to connect to apiserver
	2025/12/05 07:06:19 Using secret token for csrf signing
	2025/12/05 07:06:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/05 07:06:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/05 07:06:19 Successful initial request to the apiserver, version: v1.34.2
	2025/12/05 07:06:19 Generating JWE encryption key
	2025/12/05 07:06:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/05 07:06:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/05 07:06:19 Initializing JWE encryption key from synchronized object
	2025/12/05 07:06:19 Creating in-cluster Sidecar client
	2025/12/05 07:06:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:06:19 Serving insecurely on HTTP port: 9090
	2025/12/05 07:06:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0db2b232951f12f19adc1985fccb4c59cfe127c396e13ed58f3d14e9faa433d4] <==
	I1205 07:06:43.101120       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:06:43.108982       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:06:43.109023       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1205 07:06:43.110991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:46.566581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:50.826682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:54.425518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:06:57.479849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:00.502495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:00.506824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:07:00.506996       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:07:00.507160       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-172186_2194eba6-4b3c-4e19-bbad-a506568ce171!
	I1205 07:07:00.507142       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73acd703-8958-4c9b-a71e-6ab66433bd8b", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-172186_2194eba6-4b3c-4e19-bbad-a506568ce171 became leader
	W1205 07:07:00.509227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:00.512521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:07:00.607438       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-172186_2194eba6-4b3c-4e19-bbad-a506568ce171!
	W1205 07:07:02.515872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:02.519818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:04.523219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:04.528424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:06.534972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:06.540461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [3d055b1cda12db6333c4b7b7e4344c3b23a3f4ec76f76fce308840302458b641] <==
	I1205 07:06:12.280671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 07:06:42.282932       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186: exit status 2 (340.423192ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-172186 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.83s)
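Note on the second storage-provisioner block above: the fatal line "Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout" means that container could not reach the in-cluster kubernetes Service IP within the client timeout. A minimal sketch of re-running the same probe by hand from the node follows; the profile name is taken from the logs above, and the curl invocation is illustrative only, not part of the test harness:

	# Hedged sketch, not part of the harness: repeat the provisioner's
	# version probe against the in-cluster API service from the node.
	minikube -p default-k8s-diff-port-172186 ssh -- \
	  curl -k --max-time 5 https://10.96.0.1:443/version
	# An i/o timeout here, matching the log above, would point at service
	# routing on the node rather than at the provisioner binary itself.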

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-624263 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-624263 --alsologtostderr -v=1: exit status 80 (1.963385054s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-624263 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:07:12.159218  389109 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:07:12.159455  389109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:12.159462  389109 out.go:374] Setting ErrFile to fd 2...
	I1205 07:07:12.159466  389109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:12.159626  389109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:07:12.159834  389109 out.go:368] Setting JSON to false
	I1205 07:07:12.159850  389109 mustload.go:66] Loading cluster: newest-cni-624263
	I1205 07:07:12.160184  389109 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:12.160552  389109 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:12.182170  389109 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:12.182501  389109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:12.247996  389109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:12.235125138 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:12.248883  389109 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-624263 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1205 07:07:12.250826  389109 out.go:179] * Pausing node newest-cni-624263 ... 
	I1205 07:07:12.251957  389109 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:12.252301  389109 ssh_runner.go:195] Run: systemctl --version
	I1205 07:07:12.252361  389109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:12.275815  389109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:12.378289  389109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:12.389787  389109 pause.go:52] kubelet running: true
	I1205 07:07:12.389843  389109 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:12.528417  389109 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:12.528502  389109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:12.591628  389109 cri.go:89] found id: "af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4"
	I1205 07:07:12.591659  389109 cri.go:89] found id: "8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf"
	I1205 07:07:12.591666  389109 cri.go:89] found id: "d0abfce5c087bc9745f6cbf4f3fb0edbb94d2f33857125e80fac708771ec2b48"
	I1205 07:07:12.591670  389109 cri.go:89] found id: "b7dd1526bcbcdee4bcb466e7fb00e9c6e45c6a7c643eaff455cc39e8cadcb7d0"
	I1205 07:07:12.591676  389109 cri.go:89] found id: "ff2c7439c6494a7c11b9c98603177548654b07fa8af90217d8bc284c40e1913f"
	I1205 07:07:12.591687  389109 cri.go:89] found id: "5bbad9411c1730fb8fc31fd993b9c05654fd82cb5d89486f02679e687a86062c"
	I1205 07:07:12.591691  389109 cri.go:89] found id: ""
	I1205 07:07:12.591740  389109 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:12.603114  389109 retry.go:31] will retry after 242.546085ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:12Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:12.846612  389109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:12.859001  389109 pause.go:52] kubelet running: false
	I1205 07:07:12.859074  389109 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:12.967225  389109 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:12.967312  389109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:13.031382  389109 cri.go:89] found id: "af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4"
	I1205 07:07:13.031403  389109 cri.go:89] found id: "8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf"
	I1205 07:07:13.031407  389109 cri.go:89] found id: "d0abfce5c087bc9745f6cbf4f3fb0edbb94d2f33857125e80fac708771ec2b48"
	I1205 07:07:13.031411  389109 cri.go:89] found id: "b7dd1526bcbcdee4bcb466e7fb00e9c6e45c6a7c643eaff455cc39e8cadcb7d0"
	I1205 07:07:13.031414  389109 cri.go:89] found id: "ff2c7439c6494a7c11b9c98603177548654b07fa8af90217d8bc284c40e1913f"
	I1205 07:07:13.031417  389109 cri.go:89] found id: "5bbad9411c1730fb8fc31fd993b9c05654fd82cb5d89486f02679e687a86062c"
	I1205 07:07:13.031420  389109 cri.go:89] found id: ""
	I1205 07:07:13.031454  389109 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:13.042524  389109 retry.go:31] will retry after 255.018264ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:13Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:13.297975  389109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:13.312419  389109 pause.go:52] kubelet running: false
	I1205 07:07:13.312492  389109 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:13.440683  389109 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:13.440896  389109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:13.502626  389109 cri.go:89] found id: "af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4"
	I1205 07:07:13.502648  389109 cri.go:89] found id: "8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf"
	I1205 07:07:13.502652  389109 cri.go:89] found id: "d0abfce5c087bc9745f6cbf4f3fb0edbb94d2f33857125e80fac708771ec2b48"
	I1205 07:07:13.502656  389109 cri.go:89] found id: "b7dd1526bcbcdee4bcb466e7fb00e9c6e45c6a7c643eaff455cc39e8cadcb7d0"
	I1205 07:07:13.502658  389109 cri.go:89] found id: "ff2c7439c6494a7c11b9c98603177548654b07fa8af90217d8bc284c40e1913f"
	I1205 07:07:13.502662  389109 cri.go:89] found id: "5bbad9411c1730fb8fc31fd993b9c05654fd82cb5d89486f02679e687a86062c"
	I1205 07:07:13.502665  389109 cri.go:89] found id: ""
	I1205 07:07:13.502700  389109 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:13.515346  389109 retry.go:31] will retry after 324.090776ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:13Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:13.839854  389109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:13.852283  389109 pause.go:52] kubelet running: false
	I1205 07:07:13.852342  389109 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:13.968947  389109 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:13.969018  389109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:14.034758  389109 cri.go:89] found id: "af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4"
	I1205 07:07:14.034784  389109 cri.go:89] found id: "8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf"
	I1205 07:07:14.034790  389109 cri.go:89] found id: "d0abfce5c087bc9745f6cbf4f3fb0edbb94d2f33857125e80fac708771ec2b48"
	I1205 07:07:14.034795  389109 cri.go:89] found id: "b7dd1526bcbcdee4bcb466e7fb00e9c6e45c6a7c643eaff455cc39e8cadcb7d0"
	I1205 07:07:14.034800  389109 cri.go:89] found id: "ff2c7439c6494a7c11b9c98603177548654b07fa8af90217d8bc284c40e1913f"
	I1205 07:07:14.034804  389109 cri.go:89] found id: "5bbad9411c1730fb8fc31fd993b9c05654fd82cb5d89486f02679e687a86062c"
	I1205 07:07:14.034809  389109 cri.go:89] found id: ""
	I1205 07:07:14.034845  389109 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:14.047912  389109 out.go:203] 
	W1205 07:07:14.048998  389109 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 07:07:14.049012  389109 out.go:285] * 
	* 
	W1205 07:07:14.053377  389109 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:07:14.054556  389109 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-624263 --alsologtostderr -v=1 failed: exit status 80
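The stderr above shows the pause retries failing on the same step each time: the crictl listing finds the kube-system containers, but the follow-up `sudo runc list -f json` exits with "open /run/runc: no such file or directory" on this crio node. A minimal sketch of checking both sides by hand is below; the profile name and crictl flags are copied from the log, and the commands are illustrative, not part of the harness:

	# Hedged sketch: list the same kube-system containers over the CRI
	# socket, exactly as the log's crictl calls did.
	minikube -p newest-cni-624263 ssh -- \
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Then look at the runc state directory the failing step expects:
	minikube -p newest-cni-624263 ssh -- ls -la /run/runc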
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-624263
helpers_test.go:243: (dbg) docker inspect newest-cni-624263:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384",
	        "Created": "2025-12-05T07:06:27.282785748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:07:01.680152086Z",
	            "FinishedAt": "2025-12-05T07:07:00.574703416Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/hostname",
	        "HostsPath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/hosts",
	        "LogPath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384-json.log",
	        "Name": "/newest-cni-624263",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-624263:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-624263",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384",
	                "LowerDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-624263",
	                "Source": "/var/lib/docker/volumes/newest-cni-624263/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-624263",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-624263",
	                "name.minikube.sigs.k8s.io": "newest-cni-624263",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9065392d712d9ce4b082a6ba7159a8ffb34096cff642d19c37cd1aab5b914e2d",
	            "SandboxKey": "/var/run/docker/netns/9065392d712d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-624263": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "94030ad1138d9a442ae2471b64631306bc41b223df756631ceb53e7e7a11b469",
	                    "EndpointID": "f2b13347a58d11c836de7ee6c7b8c28ae100f4f05a4bccf59236599f726b714c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "32:57:60:86:29:ff",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-624263",
	                        "4f54f5052bf2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
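The Ports map in the inspect output above is the same data the pause attempt read at 07:07:12.252 with a Go template to locate its SSH endpoint (127.0.0.1:33138). The equivalent standalone call, shown only to illustrate that lookup against the JSON above, would be:

	# Hedged sketch: extract the host port bound to 22/tcp, as the pause
	# code's inspect call did; against the JSON above this prints 33138.
	docker container inspect newest-cni-624263 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'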
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-624263 -n newest-cni-624263
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-624263 -n newest-cni-624263: exit status 2 (312.261775ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-624263 logs -n 25
E1205 07:07:15.043286   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kindnet-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-770390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ no-preload-008839 image list --format=json                                                                                                                                                                                                           │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p no-preload-008839 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-624263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p newest-cni-624263 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-624263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ default-k8s-diff-port-172186 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ pause   │ -p default-k8s-diff-port-172186 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-172186                                                                                                                                                                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ delete  │ -p default-k8s-diff-port-172186                                                                                                                                                                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ newest-cni-624263 image list --format=json                                                                                                                                                                                                           │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:07:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:07:01.213912  384982 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:07:01.214313  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214349  384982 out.go:374] Setting ErrFile to fd 2...
	I1205 07:07:01.214355  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214781  384982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:07:01.215653  384982 out.go:368] Setting JSON to false
	I1205 07:07:01.216724  384982 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6565,"bootTime":1764911856,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:07:01.216808  384982 start.go:143] virtualization: kvm guest
	I1205 07:07:01.218407  384982 out.go:179] * [newest-cni-624263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:07:01.219810  384982 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:07:01.219833  384982 notify.go:221] Checking for updates...
	I1205 07:07:01.222062  384982 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:07:01.223099  384982 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:01.224159  384982 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:07:01.228780  384982 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:07:01.229941  384982 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:07:01.231538  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:01.232012  384982 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:07:01.255273  384982 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:07:01.255390  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.307181  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.297693108 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.307271  384982 docker.go:319] overlay module found
	I1205 07:07:01.308817  384982 out.go:179] * Using the docker driver based on existing profile
	I1205 07:07:01.309938  384982 start.go:309] selected driver: docker
	I1205 07:07:01.309951  384982 start.go:927] validating driver "docker" against &{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.310051  384982 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:07:01.310627  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.362953  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.353513591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.363234  384982 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:07:01.363265  384982 cni.go:84] Creating CNI manager for ""
	I1205 07:07:01.363312  384982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:07:01.363388  384982 start.go:353] cluster config:
	{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.364930  384982 out.go:179] * Starting "newest-cni-624263" primary control-plane node in "newest-cni-624263" cluster
	I1205 07:07:01.365960  384982 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:07:01.367044  384982 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	W1205 07:06:57.706664  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:59.707033  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:01.368093  384982 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:07:01.368198  384982 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:07:01.387169  384982 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:07:01.387192  384982 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:07:01.393466  384982 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1205 07:07:01.635612  384982 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 07:07:01.635800  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.635881  384982 cache.go:107] acquiring lock: {Name:mk98363952ca1815516604fb7dbfef9be11a7d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635913  384982 cache.go:107] acquiring lock: {Name:mkf79bca1dcd2e8402871ccbd85f08189f26d5a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635887  384982 cache.go:107] acquiring lock: {Name:mk7e52439bbd1c3c582b2dbb20db8467b0caa4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635883  384982 cache.go:107] acquiring lock: {Name:mk205a6d5dedd135c0c99429d72b9328d8d5dc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635961  384982 cache.go:107] acquiring lock: {Name:mk167c9428ef1965e0e29561c9593491905126f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636001  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:07:01.635990  384982 cache.go:107] acquiring lock: {Name:mk64ac073eb60c52be1998c1349c3f317eb7eb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:07:01.636013  384982 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 137.69µs
	I1205 07:07:01.636037  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:07:01.636039  384982 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:07:01.636031  384982 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 171.708µs
	I1205 07:07:01.636003  384982 cache.go:107] acquiring lock: {Name:mk55ddd5ec022e6049bc6d750efbad0639669233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636029  384982 cache.go:107] acquiring lock: {Name:mk4eccc9886628e868c0adec616b704f1c193356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636046  384982 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 87.511µs
	I1205 07:07:01.636052  384982 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636064  384982 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636066  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:07:01.636074  384982 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 88.508µs
	I1205 07:07:01.636082  384982 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:07:01.636019  384982 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 125.111µs
	I1205 07:07:01.636098  384982 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:07:01.636112  384982 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:07:01.636042  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:07:01.636150  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:07:01.636147  384982 start.go:360] acquireMachinesLock for newest-cni-624263: {Name:mka35bbd7b5824f70f8017fd9b3a0ee56ab72931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636147  384982 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 265.61µs
	I1205 07:07:01.636162  384982 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636158  384982 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 197.698µs
	I1205 07:07:01.636178  384982 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:07:01.636191  384982 start.go:364] duration metric: took 30.266µs to acquireMachinesLock for "newest-cni-624263"
	I1205 07:07:01.636187  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:07:01.636206  384982 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:07:01.636205  384982 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 226.523µs
	I1205 07:07:01.636213  384982 fix.go:54] fixHost starting: 
	I1205 07:07:01.636216  384982 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636234  384982 cache.go:87] Successfully saved all images to host disk.
	I1205 07:07:01.636479  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.654206  384982 fix.go:112] recreateIfNeeded on newest-cni-624263: state=Stopped err=<nil>
	W1205 07:07:01.654241  384982 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:07:01.656485  384982 out.go:252] * Restarting existing docker container for "newest-cni-624263" ...
	I1205 07:07:01.656540  384982 cli_runner.go:164] Run: docker start newest-cni-624263
	I1205 07:07:01.895199  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.914785  384982 kic.go:430] container "newest-cni-624263" state is running.
	I1205 07:07:01.915225  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:01.934239  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.934479  384982 machine.go:94] provisionDockerMachine start ...
	I1205 07:07:01.934568  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:01.952380  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:01.952665  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:01.952679  384982 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:07:01.953292  384982 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55518->127.0.0.1:33138: read: connection reset by peer
	I1205 07:07:05.092419  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:07:05.092445  384982 ubuntu.go:182] provisioning hostname "newest-cni-624263"
	I1205 07:07:05.092491  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.112429  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.112718  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.112739  384982 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-624263 && echo "newest-cni-624263" | sudo tee /etc/hostname
	I1205 07:07:05.265486  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:07:05.265582  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.285453  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.285689  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.285716  384982 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-624263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-624263/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-624263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:07:05.425411  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: 
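The SSH command just above is the provisioner's idempotent /etc/hosts edit: if no line already ends in the new hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A minimal Go sketch of the same edit, with the hostname and path taken from the log (error handling trimmed; it would need root to write /etc/hosts):

// hostspatch.go: rough Go equivalent of the shell snippet logged above.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	const hostname = "newest-cni-624263"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	content := string(data)

	// Already mapped? (some line ends in "<whitespace><hostname>")
	present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if present.MatchString(content) {
		return
	}

	// Reuse an existing 127.0.1.1 entry if there is one, otherwise append.
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(content) {
		content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
	} else {
		if !strings.HasSuffix(content, "\n") {
			content += "\n"
		}
		content += "127.0.1.1 " + hostname + "\n"
	}
	if err := os.WriteFile("/etc/hosts", []byte(content), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}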
	I1205 07:07:05.425436  384982 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:07:05.425464  384982 ubuntu.go:190] setting up certificates
	I1205 07:07:05.425475  384982 provision.go:84] configureAuth start
	I1205 07:07:05.425532  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:05.443549  384982 provision.go:143] copyHostCerts
	I1205 07:07:05.443614  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:07:05.443629  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:07:05.443700  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:07:05.443800  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:07:05.443816  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:07:05.443845  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:07:05.443904  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:07:05.443915  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:07:05.443950  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:07:05.444023  384982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.newest-cni-624263 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-624263]
	I1205 07:07:05.672635  384982 provision.go:177] copyRemoteCerts
	I1205 07:07:05.672684  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:07:05.672730  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.690043  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:05.792000  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:07:05.810085  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:07:05.827489  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:07:05.844988  384982 provision.go:87] duration metric: took 419.49922ms to configureAuth
	I1205 07:07:05.845013  384982 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:07:05.845213  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:05.845355  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.868784  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.868985  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.869010  384982 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:07:06.168481  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:07:06.168508  384982 machine.go:97] duration metric: took 4.234011493s to provisionDockerMachine
	I1205 07:07:06.168521  384982 start.go:293] postStartSetup for "newest-cni-624263" (driver="docker")
	I1205 07:07:06.168536  384982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:07:06.168593  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:07:06.168662  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.188502  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	W1205 07:07:02.207380  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:07:04.704952  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:06.292387  384982 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:07:06.295922  384982 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:07:06.295950  384982 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:07:06.295961  384982 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:07:06.296006  384982 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:07:06.296104  384982 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:07:06.296231  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:07:06.303904  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:07:06.321264  384982 start.go:296] duration metric: took 152.731097ms for postStartSetup
	I1205 07:07:06.321343  384982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:07:06.321386  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.342624  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.439978  384982 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:07:06.444248  384982 fix.go:56] duration metric: took 4.8080316s for fixHost
	I1205 07:07:06.444268  384982 start.go:83] releasing machines lock for "newest-cni-624263", held for 4.808068962s
	I1205 07:07:06.444356  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:06.461188  384982 ssh_runner.go:195] Run: cat /version.json
	I1205 07:07:06.461224  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.461315  384982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:07:06.461389  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.479772  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.480279  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.758196  384982 ssh_runner.go:195] Run: systemctl --version
	I1205 07:07:06.764592  384982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:07:06.798459  384982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:07:06.802811  384982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:07:06.802860  384982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:07:06.810439  384982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:07:06.810458  384982 start.go:496] detecting cgroup driver to use...
	I1205 07:07:06.810483  384982 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:07:06.810515  384982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:07:06.823596  384982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:07:06.835347  384982 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:07:06.835386  384982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:07:06.849102  384982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:07:06.861013  384982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:07:06.946233  384982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:07:07.034814  384982 docker.go:234] disabling docker service ...
	I1205 07:07:07.034859  384982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:07:07.048490  384982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:07:07.062338  384982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:07:07.152172  384982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:07:07.242359  384982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:07:07.254816  384982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:07:07.268657  384982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:07:07.268723  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.277649  384982 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:07:07.277721  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.287203  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.296720  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.305673  384982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:07:07.314603  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.323209  384982 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.331118  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.339939  384982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:07:07.346935  384982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:07:07.354783  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:07.445879  384982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:07:07.588541  384982 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:07:07.588604  384982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
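The two lines above begin a 60-second wait for /var/run/crio/crio.sock to reappear after `systemctl restart crio`; only once the socket exists does minikube go on to probe the crictl version. A minimal sketch of that wait, assuming a local os.Stat poll stands in for the `stat` run over SSH in the log:

// sockwait.go: poll for the CRI-O socket with a 60s deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println(sock, "is available")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
	os.Exit(1)
}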
	I1205 07:07:07.594687  384982 start.go:564] Will wait 60s for crictl version
	I1205 07:07:07.595153  384982 ssh_runner.go:195] Run: which crictl
	I1205 07:07:07.598691  384982 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:07:07.626384  384982 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:07:07.626465  384982 ssh_runner.go:195] Run: crio --version
	I1205 07:07:07.656627  384982 ssh_runner.go:195] Run: crio --version
	I1205 07:07:07.691598  384982 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 07:07:07.692738  384982 cli_runner.go:164] Run: docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:07:07.715101  384982 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1205 07:07:07.719286  384982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:07:07.731914  384982 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:07:07.733217  384982 kubeadm.go:884] updating cluster {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:07:07.733394  384982 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:07:07.733451  384982 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:07:07.764980  384982 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:07:07.765003  384982 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:07:07.765012  384982 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1205 07:07:07.765132  384982 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-624263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:07:07.765207  384982 ssh_runner.go:195] Run: crio config
	I1205 07:07:07.812534  384982 cni.go:84] Creating CNI manager for ""
	I1205 07:07:07.812555  384982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:07:07.812573  384982 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:07:07.812604  384982 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-624263 NodeName:newest-cni-624263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:07:07.812765  384982 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-624263"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:07:07.812831  384982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:07:07.820594  384982 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:07:07.820653  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:07:07.828109  384982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1205 07:07:07.840571  384982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:07:07.852346  384982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 07:07:07.864062  384982 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:07:07.867420  384982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:07:07.876647  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:07.969578  384982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:07:07.991685  384982 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263 for IP: 192.168.103.2
	I1205 07:07:07.991713  384982 certs.go:195] generating shared ca certs ...
	I1205 07:07:07.991735  384982 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:07.991888  384982 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:07:07.991947  384982 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:07:07.991961  384982 certs.go:257] generating profile certs ...
	I1205 07:07:07.992079  384982 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key
	I1205 07:07:07.992226  384982 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada
	I1205 07:07:07.992293  384982 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key
	I1205 07:07:07.992512  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:07:07.992567  384982 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:07:07.992584  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:07:07.992622  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:07:07.992661  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:07:07.992697  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:07:07.992768  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:07:07.993641  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:07:08.013632  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:07:08.033788  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:07:08.054106  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:07:08.078883  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:07:08.099768  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:07:08.116845  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:07:08.135382  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:07:08.152628  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:07:08.169338  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:07:08.186981  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:07:08.206005  384982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:07:08.218973  384982 ssh_runner.go:195] Run: openssl version
	I1205 07:07:08.224889  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.231834  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:07:08.238627  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.242398  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.242447  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.277264  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:07:08.284110  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.290922  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:07:08.298213  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.301760  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.301803  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.338438  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:07:08.345749  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.353668  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:07:08.361252  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.364769  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.364816  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.405377  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:07:08.413075  384982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:07:08.416868  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:07:08.453487  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:07:08.487644  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:07:08.533187  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:07:08.593546  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:07:08.653721  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
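Each `openssl x509 -noout -checkend 86400` call above exits non-zero if the certificate in question expires within the next 24 hours, presumably so stale control-plane certs can be regenerated before kubeadm is invoked. The same check in a short Go sketch (the path is just one of the certificates probed above):

// checkend.go: rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400: report failure if expiry falls within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past the next 24h:", cert.NotAfter)
}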
	I1205 07:07:08.709159  384982 kubeadm.go:401] StartCluster: {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:08.709282  384982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:07:08.709349  384982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:07:08.737962  384982 cri.go:89] found id: "d0abfce5c087bc9745f6cbf4f3fb0edbb94d2f33857125e80fac708771ec2b48"
	I1205 07:07:08.737982  384982 cri.go:89] found id: "b7dd1526bcbcdee4bcb466e7fb00e9c6e45c6a7c643eaff455cc39e8cadcb7d0"
	I1205 07:07:08.737987  384982 cri.go:89] found id: "ff2c7439c6494a7c11b9c98603177548654b07fa8af90217d8bc284c40e1913f"
	I1205 07:07:08.737992  384982 cri.go:89] found id: "5bbad9411c1730fb8fc31fd993b9c05654fd82cb5d89486f02679e687a86062c"
	I1205 07:07:08.737996  384982 cri.go:89] found id: ""
	I1205 07:07:08.738037  384982 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:07:08.749927  384982 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:08Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:08.750001  384982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:07:08.757435  384982 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:07:08.757451  384982 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:07:08.757493  384982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:07:08.764462  384982 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:07:08.765259  384982 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-624263" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:08.765847  384982 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-624263" cluster setting kubeconfig missing "newest-cni-624263" context setting]
	I1205 07:07:08.766845  384982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.768427  384982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:07:08.775598  384982 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1205 07:07:08.775623  384982 kubeadm.go:602] duration metric: took 18.165924ms to restartPrimaryControlPlane
	I1205 07:07:08.775632  384982 kubeadm.go:403] duration metric: took 66.480576ms to StartCluster
	I1205 07:07:08.775648  384982 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.775713  384982 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:08.777693  384982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.777931  384982 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:07:08.777993  384982 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:07:08.778091  384982 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-624263"
	I1205 07:07:08.778111  384982 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-624263"
	W1205 07:07:08.778120  384982 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:07:08.778116  384982 addons.go:70] Setting dashboard=true in profile "newest-cni-624263"
	I1205 07:07:08.778140  384982 addons.go:239] Setting addon dashboard=true in "newest-cni-624263"
	W1205 07:07:08.778150  384982 addons.go:248] addon dashboard should already be in state true
	I1205 07:07:08.778164  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:08.778186  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.778150  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.778139  384982 addons.go:70] Setting default-storageclass=true in profile "newest-cni-624263"
	I1205 07:07:08.778303  384982 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-624263"
	I1205 07:07:08.778585  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.778752  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.778783  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.779765  384982 out.go:179] * Verifying Kubernetes components...
	I1205 07:07:08.780933  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:08.804889  384982 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:07:08.804889  384982 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:07:08.806580  384982 addons.go:239] Setting addon default-storageclass=true in "newest-cni-624263"
	W1205 07:07:08.806597  384982 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:07:08.806617  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.806903  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.807441  384982 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:07:08.807461  384982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:07:08.807530  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.808424  384982 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 07:07:08.809309  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:07:08.809353  384982 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:07:08.809407  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.834751  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.836077  384982 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:07:08.836291  384982 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:07:08.837052  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.842660  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.859675  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.933525  384982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:07:08.947274  384982 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:07:08.947358  384982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:07:08.951314  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:07:08.951373  384982 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:07:08.952715  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:07:08.960188  384982 api_server.go:72] duration metric: took 182.229824ms to wait for apiserver process to appear ...
	I1205 07:07:08.960210  384982 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:07:08.960226  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:08.965821  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:07:08.965841  384982 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:07:08.967346  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:07:08.980049  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:07:08.980067  384982 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:07:08.994281  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:07:08.994299  384982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:07:09.008287  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:07:09.008306  384982 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:07:09.021481  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:07:09.021501  384982 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:07:09.034096  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:07:09.034115  384982 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:07:09.046446  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:07:09.046466  384982 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:07:09.058389  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:07:09.058405  384982 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:07:09.070248  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:07:10.183992  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 07:07:10.184023  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 07:07:10.184136  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.262013  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.262086  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:10.460707  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.465761  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.465796  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:10.811423  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.858674166s)
	I1205 07:07:10.811423  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.8440466s)
	I1205 07:07:10.811561  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.741287368s)
	I1205 07:07:10.815716  384982 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-624263 addons enable metrics-server
	
	I1205 07:07:10.822997  384982 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1205 07:07:10.824128  384982 addons.go:530] duration metric: took 2.046144375s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1205 07:07:10.961075  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.965412  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.965439  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:11.461149  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:11.465102  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1205 07:07:11.466004  384982 api_server.go:141] control plane version: v1.35.0-beta.0
	I1205 07:07:11.466025  384982 api_server.go:131] duration metric: took 2.505809422s to wait for apiserver health ...
	I1205 07:07:11.466034  384982 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:07:11.469408  384982 system_pods.go:59] 8 kube-system pods found
	I1205 07:07:11.469441  384982 system_pods.go:61] "coredns-7d764666f9-jkmhj" [126785e3-c7a3-451f-ac72-e05d87bb32f0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1205 07:07:11.469449  384982 system_pods.go:61] "etcd-newest-cni-624263" [9a4fe128-6030-4681-b201-a2a13ac29474] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:07:11.469475  384982 system_pods.go:61] "kindnet-fctwl" [29a59939-b66c-4796-9a9e-e1b442bccf1f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:07:11.469490  384982 system_pods.go:61] "kube-apiserver-newest-cni-624263" [2fc9852f-c8d5-41c2-8dbe-41056e227d75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:07:11.469499  384982 system_pods.go:61] "kube-controller-manager-newest-cni-624263" [957b864f-8ee5-40ce-9e1f-4396041c4525] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:07:11.469510  384982 system_pods.go:61] "kube-proxy-8v5qr" [59595bdd-49dc-4491-b494-1c48474ea8c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:07:11.469520  384982 system_pods.go:61] "kube-scheduler-newest-cni-624263" [a3c04907-1ac1-43af-827b-b4ab46dd553c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:07:11.469533  384982 system_pods.go:61] "storage-provisioner" [1cfc97af-739e-4ee9-838a-75962c29bc63] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1205 07:07:11.469542  384982 system_pods.go:74] duration metric: took 3.503315ms to wait for pod list to return data ...
	I1205 07:07:11.469551  384982 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:07:11.471664  384982 default_sa.go:45] found service account: "default"
	I1205 07:07:11.471681  384982 default_sa.go:55] duration metric: took 2.121784ms for default service account to be created ...
	I1205 07:07:11.471691  384982 kubeadm.go:587] duration metric: took 2.693735692s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:07:11.471704  384982 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:07:11.473883  384982 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:07:11.473903  384982 node_conditions.go:123] node cpu capacity is 8
	I1205 07:07:11.473915  384982 node_conditions.go:105] duration metric: took 2.207592ms to run NodePressure ...
	I1205 07:07:11.473924  384982 start.go:242] waiting for startup goroutines ...
	I1205 07:07:11.473931  384982 start.go:247] waiting for cluster config update ...
	I1205 07:07:11.473942  384982 start.go:256] writing updated cluster config ...
	I1205 07:07:11.474153  384982 ssh_runner.go:195] Run: rm -f paused
	I1205 07:07:11.522329  384982 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1205 07:07:11.524757  384982 out.go:179] * Done! kubectl is now configured to use "newest-cni-624263" cluster and "default" namespace by default
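The 403 and 500 responses above are the normal progression while the restarted kube-apiserver finishes its post-start hooks; minikube simply re-polls /healthz until it returns 200. The same verbose checklist can be fetched by hand; a minimal sketch, assuming the newest-cni-624263 context created by this run is still present in the kubeconfig:

  $ kubectl --context newest-cni-624263 get --raw='/healthz?verbose'
  # prints the same [+]/[-] post-start-hook list seen in the log; an unauthenticated
  # request (e.g. plain curl without client certificates) gets the 403 "system:anonymous" error instead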
	W1205 07:07:06.706696  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:07:08.706849  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:10.705104  375543 pod_ready.go:94] pod "coredns-66bc5c9577-rg55r" is "Ready"
	I1205 07:07:10.705136  375543 pod_ready.go:86] duration metric: took 31.504740744s for pod "coredns-66bc5c9577-rg55r" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.707363  375543 pod_ready.go:83] waiting for pod "etcd-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.711598  375543 pod_ready.go:94] pod "etcd-embed-certs-770390" is "Ready"
	I1205 07:07:10.711616  375543 pod_ready.go:86] duration metric: took 4.234427ms for pod "etcd-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.713476  375543 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.717163  375543 pod_ready.go:94] pod "kube-apiserver-embed-certs-770390" is "Ready"
	I1205 07:07:10.717181  375543 pod_ready.go:86] duration metric: took 3.676871ms for pod "kube-apiserver-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.719115  375543 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.903969  375543 pod_ready.go:94] pod "kube-controller-manager-embed-certs-770390" is "Ready"
	I1205 07:07:10.903993  375543 pod_ready.go:86] duration metric: took 184.859493ms for pod "kube-controller-manager-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.104836  375543 pod_ready.go:83] waiting for pod "kube-proxy-7bjnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.504196  375543 pod_ready.go:94] pod "kube-proxy-7bjnn" is "Ready"
	I1205 07:07:11.504227  375543 pod_ready.go:86] duration metric: took 399.358917ms for pod "kube-proxy-7bjnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.703987  375543 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:12.103435  375543 pod_ready.go:94] pod "kube-scheduler-embed-certs-770390" is "Ready"
	I1205 07:07:12.103462  375543 pod_ready.go:86] duration metric: took 399.448083ms for pod "kube-scheduler-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:12.103479  375543 pod_ready.go:40] duration metric: took 32.906123608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:07:12.153648  375543 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 07:07:12.156415  375543 out.go:179] * Done! kubectl is now configured to use "embed-certs-770390" cluster and "default" namespace by default
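The embed-certs-770390 output above is minikube's per-pod readiness wait: it loops over the labelled kube-system pods until each is Ready or gone. Roughly the same check can be run directly with kubectl; a sketch, assuming the embed-certs-770390 context and the label selectors listed at the end of the wait:

  $ kubectl --context embed-certs-770390 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=120s
  # repeat with component=etcd, component=kube-apiserver, component=kube-controller-manager,
  # k8s-app=kube-proxy and component=kube-scheduler to mirror the full pod_ready.go wait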
	
	
	==> CRI-O <==
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.369705791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.372516097Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ab3d6aad-91f8-4320-b7fe-6263b7982596 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.373163996Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=352bedaf-2ced-4e95-91ab-82e20a884b39 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.373868319Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.374504174Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.374796908Z" level=info msg="Ran pod sandbox 3907434eb54becc5229939bd66f17481d6fe0dc1acad139365172c3c35f75bb7 with infra container: kube-system/kindnet-fctwl/POD" id=ab3d6aad-91f8-4320-b7fe-6263b7982596 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.375278921Z" level=info msg="Ran pod sandbox 8f67727f97493cd5f1f1132e5371fc981725a2110f2e2e0386530b77bf44559e with infra container: kube-system/kube-proxy-8v5qr/POD" id=352bedaf-2ced-4e95-91ab-82e20a884b39 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.375797349Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0d0498f1-e01b-4748-839c-dd0f804e9912 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.376115224Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1ecaebba-d66c-45a1-b32e-c05e66ea1a66 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.376677161Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5c1c992f-ef5d-4cb5-9630-d2963888fc1e name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.377037561Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=889b9399-72dd-445d-8f42-932bde7cfcdb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.377757848Z" level=info msg="Creating container: kube-system/kindnet-fctwl/kindnet-cni" id=60d05929-03e1-4bc3-99eb-5faa32cb5609 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.377846902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.377930722Z" level=info msg="Creating container: kube-system/kube-proxy-8v5qr/kube-proxy" id=09c058a1-fdc4-4b93-a44c-f9e7c357a649 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.378063382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.381924377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.382497732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.384201699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.384683839Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.409840313Z" level=info msg="Created container 8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf: kube-system/kindnet-fctwl/kindnet-cni" id=60d05929-03e1-4bc3-99eb-5faa32cb5609 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.410348584Z" level=info msg="Starting container: 8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf" id=e38ed321-f72a-4ecb-addb-edbdfccb7522 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.412086564Z" level=info msg="Started container" PID=1045 containerID=8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf description=kube-system/kindnet-fctwl/kindnet-cni id=e38ed321-f72a-4ecb-addb-edbdfccb7522 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3907434eb54becc5229939bd66f17481d6fe0dc1acad139365172c3c35f75bb7
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.414344948Z" level=info msg="Created container af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4: kube-system/kube-proxy-8v5qr/kube-proxy" id=09c058a1-fdc4-4b93-a44c-f9e7c357a649 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.414824568Z" level=info msg="Starting container: af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4" id=11c9f22b-2a23-432b-a3d6-55f9dacb25b4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.417677733Z" level=info msg="Started container" PID=1046 containerID=af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4 description=kube-system/kube-proxy-8v5qr/kube-proxy id=11c9f22b-2a23-432b-a3d6-55f9dacb25b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f67727f97493cd5f1f1132e5371fc981725a2110f2e2e0386530b77bf44559e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	af4221aba90f3       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   3 seconds ago       Running             kube-proxy                1                   8f67727f97493       kube-proxy-8v5qr                            kube-system
	8be704aa57ce4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   3 seconds ago       Running             kindnet-cni               1                   3907434eb54be       kindnet-fctwl                               kube-system
	d0abfce5c087b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   6 seconds ago       Running             etcd                      1                   eca55194a02a1       etcd-newest-cni-624263                      kube-system
	b7dd1526bcbcd       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   6 seconds ago       Running             kube-apiserver            1                   990022e5d8b06       kube-apiserver-newest-cni-624263            kube-system
	ff2c7439c6494       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   6 seconds ago       Running             kube-controller-manager   1                   2dffcf88ee1f7       kube-controller-manager-newest-cni-624263   kube-system
	5bbad9411c173       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   6 seconds ago       Running             kube-scheduler            1                   d12e7ea652633       kube-scheduler-newest-cni-624263            kube-system
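The container status table is the CRI's view of the node a few seconds after the restart: every control-plane container is on attempt 1 and Running. The same listing can be reproduced on the node itself if needed; a sketch via the minikube ssh wrapper, assuming crictl is available in the node image (it normally is):

  $ out/minikube-linux-amd64 -p newest-cni-624263 ssh -- sudo crictl ps -a
  # lists all CRI-O managed containers with image, state, attempt and pod, i.e. the table above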
	
	
	==> describe nodes <==
	Name:               newest-cni-624263
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-624263
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=newest-cni-624263
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_06_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:06:47 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-624263
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:07:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:07:10 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:07:10 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:07:10 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 05 Dec 2025 07:07:10 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-624263
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                74ead395-c6a4-4eb4-a8b4-1e768c64ff0f
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-624263                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25s
	  kube-system                 kindnet-fctwl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-apiserver-newest-cni-624263             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-newest-cni-624263    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-8v5qr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-newest-cni-624263             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  21s   node-controller  Node newest-cni-624263 event: Registered Node newest-cni-624263 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-624263 event: Registered Node newest-cni-624263 in Controller
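At this point the node still carries the node.kubernetes.io/not-ready taint and reports Ready=False because CRI-O has not yet picked up a CNI configuration (the kindnet container had only just started), which is also why coredns and storage-provisioner are Pending earlier in the log. One way to watch that condition clear; a sketch against the same context, using standard kubectl jsonpath filtering:

  $ kubectl --context newest-cni-624263 get node newest-cni-624263 \
      -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}'
  # prints the remaining taints and the kubelet's Ready-condition message
  # (the "no CNI configuration file in /etc/cni/net.d/" text shown above)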
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [d0abfce5c087bc9745f6cbf4f3fb0edbb94d2f33857125e80fac708771ec2b48] <==
	{"level":"warn","ts":"2025-12-05T07:07:09.557514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.563625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.576598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.584248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.591221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.598200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.606086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.613535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.619914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.626530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.640443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.647537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.654013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.660022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.666753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.673759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.683009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.692305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.700151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.708140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.720158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.740403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.746734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.754948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.805599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37816","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:07:15 up  1:49,  0 user,  load average: 3.65, 3.35, 2.30
	Linux newest-cni-624263 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf] <==
	I1205 07:07:11.606019       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:07:11.698303       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1205 07:07:11.698456       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:07:11.698474       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:07:11.698499       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:07:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:07:11.899011       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:07:11.899084       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:07:11.899106       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:07:11.899266       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:07:12.299236       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:07:12.299406       1 metrics.go:72] Registering metrics
	I1205 07:07:12.299547       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [b7dd1526bcbcdee4bcb466e7fb00e9c6e45c6a7c643eaff455cc39e8cadcb7d0] <==
	I1205 07:07:10.276580       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:07:10.276587       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:07:10.276594       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:07:10.276812       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 07:07:10.276818       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:10.276843       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1205 07:07:10.276822       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 07:07:10.277069       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 07:07:10.277880       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1205 07:07:10.283400       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1205 07:07:10.292624       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 07:07:10.304238       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:07:10.327771       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:07:10.547689       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 07:07:10.579897       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:07:10.600705       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:07:10.610733       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:07:10.620660       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:07:10.658458       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.112.38"}
	I1205 07:07:10.669424       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.92.171"}
	I1205 07:07:11.179774       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1205 07:07:13.909546       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:07:14.009661       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:07:14.059830       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:07:14.111041       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ff2c7439c6494a7c11b9c98603177548654b07fa8af90217d8bc284c40e1913f] <==
	I1205 07:07:13.411823       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.411835       1 range_allocator.go:177] "Sending events to api server"
	I1205 07:07:13.411798       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.411859       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1205 07:07:13.411865       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:07:13.411874       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.411890       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.411942       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.411969       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412014       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412025       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412097       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412256       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412269       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412545       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412711       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412909       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.413092       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.413130       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.415314       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.419797       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:07:13.511399       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.511417       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1205 07:07:13.511423       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1205 07:07:13.520061       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4] <==
	I1205 07:07:11.450702       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:07:11.523058       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:07:11.623266       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:11.623343       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1205 07:07:11.623498       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:07:11.643042       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:07:11.643091       1 server_linux.go:136] "Using iptables Proxier"
	I1205 07:07:11.648007       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:07:11.648419       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1205 07:07:11.648460       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:07:11.649877       1 config.go:200] "Starting service config controller"
	I1205 07:07:11.649903       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:07:11.649905       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:07:11.649920       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:07:11.649941       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:07:11.649955       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:07:11.650048       1 config.go:309] "Starting node config controller"
	I1205 07:07:11.650106       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:07:11.650119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:07:11.749997       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:07:11.750026       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:07:11.750065       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5bbad9411c1730fb8fc31fd993b9c05654fd82cb5d89486f02679e687a86062c] <==
	I1205 07:07:08.765195       1 serving.go:386] Generated self-signed cert in-memory
	W1205 07:07:10.204204       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 07:07:10.204259       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 07:07:10.204280       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 07:07:10.204289       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 07:07:10.257218       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1205 07:07:10.257299       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:07:10.260081       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:07:10.260115       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:07:10.260220       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 07:07:10.260405       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 07:07:10.360462       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.292737     666 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: E1205 07:07:10.301235     666 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-624263\" already exists" pod="kube-system/kube-apiserver-newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.301269     666 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.304374     666 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.304470     666 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.304506     666 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.305288     666 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: E1205 07:07:10.307692     666 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-624263\" already exists" pod="kube-system/kube-controller-manager-newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.307724     666 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: E1205 07:07:10.316011     666 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-624263\" already exists" pod="kube-system/kube-scheduler-newest-cni-624263"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.060622     666 apiserver.go:52] "Watching apiserver"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: E1205 07:07:11.065561     666 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-624263" containerName="kube-controller-manager"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.068075     666 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: E1205 07:07:11.106025     666 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-624263" containerName="kube-apiserver"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: E1205 07:07:11.106136     666 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-624263" containerName="etcd"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: E1205 07:07:11.106401     666 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-624263" containerName="kube-scheduler"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.122517     666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59595bdd-49dc-4491-b494-1c48474ea8c4-lib-modules\") pod \"kube-proxy-8v5qr\" (UID: \"59595bdd-49dc-4491-b494-1c48474ea8c4\") " pod="kube-system/kube-proxy-8v5qr"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.122564     666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/29a59939-b66c-4796-9a9e-e1b442bccf1f-cni-cfg\") pod \"kindnet-fctwl\" (UID: \"29a59939-b66c-4796-9a9e-e1b442bccf1f\") " pod="kube-system/kindnet-fctwl"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.122588     666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29a59939-b66c-4796-9a9e-e1b442bccf1f-lib-modules\") pod \"kindnet-fctwl\" (UID: \"29a59939-b66c-4796-9a9e-e1b442bccf1f\") " pod="kube-system/kindnet-fctwl"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.122632     666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59595bdd-49dc-4491-b494-1c48474ea8c4-xtables-lock\") pod \"kube-proxy-8v5qr\" (UID: \"59595bdd-49dc-4491-b494-1c48474ea8c4\") " pod="kube-system/kube-proxy-8v5qr"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.122670     666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29a59939-b66c-4796-9a9e-e1b442bccf1f-xtables-lock\") pod \"kindnet-fctwl\" (UID: \"29a59939-b66c-4796-9a9e-e1b442bccf1f\") " pod="kube-system/kindnet-fctwl"
	Dec 05 07:07:12 newest-cni-624263 kubelet[666]: E1205 07:07:12.217880     666 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-624263" containerName="etcd"
	Dec 05 07:07:12 newest-cni-624263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:07:12 newest-cni-624263 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:07:12 newest-cni-624263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-624263 -n newest-cni-624263
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-624263 -n newest-cni-624263: exit status 2 (306.918619ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-624263 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-jkmhj storage-provisioner dashboard-metrics-scraper-867fb5f87b-fzkdj kubernetes-dashboard-b84665fb8-h2xph
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-624263 describe pod coredns-7d764666f9-jkmhj storage-provisioner dashboard-metrics-scraper-867fb5f87b-fzkdj kubernetes-dashboard-b84665fb8-h2xph
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-624263 describe pod coredns-7d764666f9-jkmhj storage-provisioner dashboard-metrics-scraper-867fb5f87b-fzkdj kubernetes-dashboard-b84665fb8-h2xph: exit status 1 (56.458199ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jkmhj" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-fzkdj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-h2xph" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-624263 describe pod coredns-7d764666f9-jkmhj storage-provisioner dashboard-metrics-scraper-867fb5f87b-fzkdj kubernetes-dashboard-b84665fb8-h2xph: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-624263
helpers_test.go:243: (dbg) docker inspect newest-cni-624263:

-- stdout --
	[
	    {
	        "Id": "4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384",
	        "Created": "2025-12-05T07:06:27.282785748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:07:01.680152086Z",
	            "FinishedAt": "2025-12-05T07:07:00.574703416Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/hostname",
	        "HostsPath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/hosts",
	        "LogPath": "/var/lib/docker/containers/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384/4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384-json.log",
	        "Name": "/newest-cni-624263",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-624263:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-624263",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4f54f5052bf2e50393030cbd7aeff3bf5987d62c81095ba1019eea93e18ea384",
	                "LowerDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/09481e444986447831032a2dc4e857f0e7a78aa4ad30a4066af92bdb84215efc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-624263",
	                "Source": "/var/lib/docker/volumes/newest-cni-624263/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-624263",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-624263",
	                "name.minikube.sigs.k8s.io": "newest-cni-624263",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9065392d712d9ce4b082a6ba7159a8ffb34096cff642d19c37cd1aab5b914e2d",
	            "SandboxKey": "/var/run/docker/netns/9065392d712d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-624263": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "94030ad1138d9a442ae2471b64631306bc41b223df756631ceb53e7e7a11b469",
	                    "EndpointID": "f2b13347a58d11c836de7ee6c7b8c28ae100f4f05a4bccf59236599f726b714c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "32:57:60:86:29:ff",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-624263",
	                        "4f54f5052bf2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-624263 -n newest-cni-624263
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-624263 -n newest-cni-624263: exit status 2 (304.715819ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-624263 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:05 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-770390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-770390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ no-preload-008839 image list --format=json                                                                                                                                                                                                           │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p no-preload-008839 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-624263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p newest-cni-624263 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-624263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ default-k8s-diff-port-172186 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ pause   │ -p default-k8s-diff-port-172186 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-172186                                                                                                                                                                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ delete  │ -p default-k8s-diff-port-172186                                                                                                                                                                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ newest-cni-624263 image list --format=json                                                                                                                                                                                                           │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:07:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:07:01.213912  384982 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:07:01.214313  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214349  384982 out.go:374] Setting ErrFile to fd 2...
	I1205 07:07:01.214355  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214781  384982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:07:01.215653  384982 out.go:368] Setting JSON to false
	I1205 07:07:01.216724  384982 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6565,"bootTime":1764911856,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:07:01.216808  384982 start.go:143] virtualization: kvm guest
	I1205 07:07:01.218407  384982 out.go:179] * [newest-cni-624263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:07:01.219810  384982 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:07:01.219833  384982 notify.go:221] Checking for updates...
	I1205 07:07:01.222062  384982 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:07:01.223099  384982 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:01.224159  384982 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:07:01.228780  384982 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:07:01.229941  384982 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:07:01.231538  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:01.232012  384982 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:07:01.255273  384982 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:07:01.255390  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.307181  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.297693108 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.307271  384982 docker.go:319] overlay module found
	I1205 07:07:01.308817  384982 out.go:179] * Using the docker driver based on existing profile
	I1205 07:07:01.309938  384982 start.go:309] selected driver: docker
	I1205 07:07:01.309951  384982 start.go:927] validating driver "docker" against &{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.310051  384982 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:07:01.310627  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.362953  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.353513591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.363234  384982 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:07:01.363265  384982 cni.go:84] Creating CNI manager for ""
	I1205 07:07:01.363312  384982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:07:01.363388  384982 start.go:353] cluster config:
	{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.364930  384982 out.go:179] * Starting "newest-cni-624263" primary control-plane node in "newest-cni-624263" cluster
	I1205 07:07:01.365960  384982 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:07:01.367044  384982 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	W1205 07:06:57.706664  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:59.707033  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:01.368093  384982 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:07:01.368198  384982 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:07:01.387169  384982 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:07:01.387192  384982 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:07:01.393466  384982 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1205 07:07:01.635612  384982 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 07:07:01.635800  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.635881  384982 cache.go:107] acquiring lock: {Name:mk98363952ca1815516604fb7dbfef9be11a7d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635913  384982 cache.go:107] acquiring lock: {Name:mkf79bca1dcd2e8402871ccbd85f08189f26d5a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635887  384982 cache.go:107] acquiring lock: {Name:mk7e52439bbd1c3c582b2dbb20db8467b0caa4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635883  384982 cache.go:107] acquiring lock: {Name:mk205a6d5dedd135c0c99429d72b9328d8d5dc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635961  384982 cache.go:107] acquiring lock: {Name:mk167c9428ef1965e0e29561c9593491905126f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636001  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:07:01.635990  384982 cache.go:107] acquiring lock: {Name:mk64ac073eb60c52be1998c1349c3f317eb7eb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:07:01.636013  384982 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 137.69µs
	I1205 07:07:01.636037  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:07:01.636039  384982 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:07:01.636031  384982 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 171.708µs
	I1205 07:07:01.636003  384982 cache.go:107] acquiring lock: {Name:mk55ddd5ec022e6049bc6d750efbad0639669233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636029  384982 cache.go:107] acquiring lock: {Name:mk4eccc9886628e868c0adec616b704f1c193356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636046  384982 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 87.511µs
	I1205 07:07:01.636052  384982 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636064  384982 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636066  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:07:01.636074  384982 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 88.508µs
	I1205 07:07:01.636082  384982 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:07:01.636019  384982 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 125.111µs
	I1205 07:07:01.636098  384982 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:07:01.636112  384982 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:07:01.636042  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:07:01.636150  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:07:01.636147  384982 start.go:360] acquireMachinesLock for newest-cni-624263: {Name:mka35bbd7b5824f70f8017fd9b3a0ee56ab72931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636147  384982 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 265.61µs
	I1205 07:07:01.636162  384982 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636158  384982 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 197.698µs
	I1205 07:07:01.636178  384982 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:07:01.636191  384982 start.go:364] duration metric: took 30.266µs to acquireMachinesLock for "newest-cni-624263"
	I1205 07:07:01.636187  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:07:01.636206  384982 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:07:01.636205  384982 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 226.523µs
	I1205 07:07:01.636213  384982 fix.go:54] fixHost starting: 
	I1205 07:07:01.636216  384982 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636234  384982 cache.go:87] Successfully saved all images to host disk.
	I1205 07:07:01.636479  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.654206  384982 fix.go:112] recreateIfNeeded on newest-cni-624263: state=Stopped err=<nil>
	W1205 07:07:01.654241  384982 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:07:01.656485  384982 out.go:252] * Restarting existing docker container for "newest-cni-624263" ...
	I1205 07:07:01.656540  384982 cli_runner.go:164] Run: docker start newest-cni-624263
	I1205 07:07:01.895199  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.914785  384982 kic.go:430] container "newest-cni-624263" state is running.
	I1205 07:07:01.915225  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:01.934239  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.934479  384982 machine.go:94] provisionDockerMachine start ...
	I1205 07:07:01.934568  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:01.952380  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:01.952665  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:01.952679  384982 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:07:01.953292  384982 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55518->127.0.0.1:33138: read: connection reset by peer
	I1205 07:07:05.092419  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:07:05.092445  384982 ubuntu.go:182] provisioning hostname "newest-cni-624263"
	I1205 07:07:05.092491  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.112429  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.112718  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.112739  384982 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-624263 && echo "newest-cni-624263" | sudo tee /etc/hostname
	I1205 07:07:05.265486  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:07:05.265582  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.285453  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.285689  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.285716  384982 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-624263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-624263/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-624263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:07:05.425411  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:07:05.425436  384982 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:07:05.425464  384982 ubuntu.go:190] setting up certificates
	I1205 07:07:05.425475  384982 provision.go:84] configureAuth start
	I1205 07:07:05.425532  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:05.443549  384982 provision.go:143] copyHostCerts
	I1205 07:07:05.443614  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:07:05.443629  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:07:05.443700  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:07:05.443800  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:07:05.443816  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:07:05.443845  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:07:05.443904  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:07:05.443915  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:07:05.443950  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:07:05.444023  384982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.newest-cni-624263 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-624263]
	I1205 07:07:05.672635  384982 provision.go:177] copyRemoteCerts
	I1205 07:07:05.672684  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:07:05.672730  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.690043  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:05.792000  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:07:05.810085  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:07:05.827489  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:07:05.844988  384982 provision.go:87] duration metric: took 419.49922ms to configureAuth
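The configureAuth step above regenerates the machine's server certificate with the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.103.2, localhost, minikube, newest-cni-624263) and ships it to /etc/docker on the node. A minimal sketch of checking the generated cert by hand, using the host-side path from this log; this is not something the test itself runs:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'
  # should list the DNS names and IPs quoted in the provision.go:117 line above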
	I1205 07:07:05.845013  384982 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:07:05.845213  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:05.845355  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.868784  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.868985  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.869010  384982 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:07:06.168481  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:07:06.168508  384982 machine.go:97] duration metric: took 4.234011493s to provisionDockerMachine
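The SSH command above drops an --insecure-registry flag for the service CIDR (10.96.0.0/12) into /etc/sysconfig/crio.minikube and restarts CRI-O. A quick way to confirm the drop-in landed, assuming the profile name and the binary path used elsewhere in this report; a sketch only:

  out/minikube-linux-amd64 -p newest-cni-624263 ssh -- cat /etc/sysconfig/crio.minikube
  out/minikube-linux-amd64 -p newest-cni-624263 ssh -- systemctl is-active crio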
	I1205 07:07:06.168521  384982 start.go:293] postStartSetup for "newest-cni-624263" (driver="docker")
	I1205 07:07:06.168536  384982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:07:06.168593  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:07:06.168662  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.188502  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	W1205 07:07:02.207380  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:07:04.704952  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:06.292387  384982 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:07:06.295922  384982 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:07:06.295950  384982 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:07:06.295961  384982 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:07:06.296006  384982 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:07:06.296104  384982 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:07:06.296231  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:07:06.303904  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:07:06.321264  384982 start.go:296] duration metric: took 152.731097ms for postStartSetup
	I1205 07:07:06.321343  384982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:07:06.321386  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.342624  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.439978  384982 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:07:06.444248  384982 fix.go:56] duration metric: took 4.8080316s for fixHost
	I1205 07:07:06.444268  384982 start.go:83] releasing machines lock for "newest-cni-624263", held for 4.808068962s
	I1205 07:07:06.444356  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:06.461188  384982 ssh_runner.go:195] Run: cat /version.json
	I1205 07:07:06.461224  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.461315  384982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:07:06.461389  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.479772  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.480279  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.758196  384982 ssh_runner.go:195] Run: systemctl --version
	I1205 07:07:06.764592  384982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:07:06.798459  384982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:07:06.802811  384982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:07:06.802860  384982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:07:06.810439  384982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:07:06.810458  384982 start.go:496] detecting cgroup driver to use...
	I1205 07:07:06.810483  384982 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:07:06.810515  384982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:07:06.823596  384982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:07:06.835347  384982 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:07:06.835386  384982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:07:06.849102  384982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:07:06.861013  384982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:07:06.946233  384982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:07:07.034814  384982 docker.go:234] disabling docker service ...
	I1205 07:07:07.034859  384982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:07:07.048490  384982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:07:07.062338  384982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:07:07.152172  384982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:07:07.242359  384982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:07:07.254816  384982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:07:07.268657  384982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:07:07.268723  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.277649  384982 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:07:07.277721  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.287203  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.296720  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.305673  384982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:07:07.314603  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.323209  384982 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.331118  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.339939  384982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:07:07.346935  384982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:07:07.354783  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:07.445879  384982 ssh_runner.go:195] Run: sudo systemctl restart crio
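The sed/sysctl sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before the crio restart. A sketch of checking the effective values on the node, assuming the file layout shown in this log:

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected, roughly:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "systemd"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",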
	I1205 07:07:07.588541  384982 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:07:07.588604  384982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:07:07.594687  384982 start.go:564] Will wait 60s for crictl version
	I1205 07:07:07.595153  384982 ssh_runner.go:195] Run: which crictl
	I1205 07:07:07.598691  384982 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:07:07.626384  384982 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:07:07.626465  384982 ssh_runner.go:195] Run: crio --version
	I1205 07:07:07.656627  384982 ssh_runner.go:195] Run: crio --version
	I1205 07:07:07.691598  384982 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 07:07:07.692738  384982 cli_runner.go:164] Run: docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:07:07.715101  384982 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1205 07:07:07.719286  384982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:07:07.731914  384982 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:07:07.733217  384982 kubeadm.go:884] updating cluster {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:07:07.733394  384982 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:07:07.733451  384982 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:07:07.764980  384982 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:07:07.765003  384982 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:07:07.765012  384982 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1205 07:07:07.765132  384982 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-624263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
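The kubelet unit fragment above is what lands in the 375-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf scp a few lines below (that pairing is inferred from the sizes, not stated by the log). On the node it can be inspected with systemd's own tooling:

  systemctl cat kubelet
  systemctl show -p ExecStart --no-pager kubelet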
	I1205 07:07:07.765207  384982 ssh_runner.go:195] Run: crio config
	I1205 07:07:07.812534  384982 cni.go:84] Creating CNI manager for ""
	I1205 07:07:07.812555  384982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:07:07.812573  384982 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:07:07.812604  384982 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-624263 NodeName:newest-cni-624263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:07:07.812765  384982 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-624263"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
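The kubeadm/kubelet/kube-proxy config printed above is shipped to /var/tmp/minikube/kubeadm.yaml.new (the 2221-byte scp below). If the bundled kubeadm build supports the "config validate" subcommand, the file can be sanity-checked offline; a sketch, not part of the test:

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new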
	I1205 07:07:07.812831  384982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:07:07.820594  384982 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:07:07.820653  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:07:07.828109  384982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1205 07:07:07.840571  384982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:07:07.852346  384982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 07:07:07.864062  384982 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:07:07.867420  384982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:07:07.876647  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:07.969578  384982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:07:07.991685  384982 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263 for IP: 192.168.103.2
	I1205 07:07:07.991713  384982 certs.go:195] generating shared ca certs ...
	I1205 07:07:07.991735  384982 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:07.991888  384982 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:07:07.991947  384982 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:07:07.991961  384982 certs.go:257] generating profile certs ...
	I1205 07:07:07.992079  384982 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key
	I1205 07:07:07.992226  384982 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada
	I1205 07:07:07.992293  384982 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key
	I1205 07:07:07.992512  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:07:07.992567  384982 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:07:07.992584  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:07:07.992622  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:07:07.992661  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:07:07.992697  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:07:07.992768  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:07:07.993641  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:07:08.013632  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:07:08.033788  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:07:08.054106  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:07:08.078883  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:07:08.099768  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:07:08.116845  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:07:08.135382  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:07:08.152628  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:07:08.169338  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:07:08.186981  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:07:08.206005  384982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:07:08.218973  384982 ssh_runner.go:195] Run: openssl version
	I1205 07:07:08.224889  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.231834  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:07:08.238627  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.242398  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.242447  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.277264  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:07:08.284110  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.290922  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:07:08.298213  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.301760  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.301803  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.338438  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:07:08.345749  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.353668  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:07:08.361252  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.364769  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.364816  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.405377  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
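The ln -fs / test -L pairs above recreate the OpenSSL hash links that c_rehash would make: each /etc/ssl/certs/<hash>.0 name is the subject hash of the corresponding CA. The b5213941.0 link checked earlier, for example, comes from:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # prints: b5213941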
	I1205 07:07:08.413075  384982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:07:08.416868  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:07:08.453487  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:07:08.487644  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:07:08.533187  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:07:08.593546  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:07:08.653721  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
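Each openssl call above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is apparently how the restart path decides whether the existing control-plane certs can be reused. Run by hand on the node it looks like:

  sudo openssl x509 -noout -checkend 86400 \
    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
    && echo "valid for at least 24h" || echo "expiring within 24h"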
	I1205 07:07:08.709159  384982 kubeadm.go:401] StartCluster: {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:08.709282  384982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:07:08.709349  384982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:07:08.737962  384982 cri.go:89] found id: "d0abfce5c087bc9745f6cbf4f3fb0edbb94d2f33857125e80fac708771ec2b48"
	I1205 07:07:08.737982  384982 cri.go:89] found id: "b7dd1526bcbcdee4bcb466e7fb00e9c6e45c6a7c643eaff455cc39e8cadcb7d0"
	I1205 07:07:08.737987  384982 cri.go:89] found id: "ff2c7439c6494a7c11b9c98603177548654b07fa8af90217d8bc284c40e1913f"
	I1205 07:07:08.737992  384982 cri.go:89] found id: "5bbad9411c1730fb8fc31fd993b9c05654fd82cb5d89486f02679e687a86062c"
	I1205 07:07:08.737996  384982 cri.go:89] found id: ""
	I1205 07:07:08.738037  384982 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:07:08.749927  384982 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:08Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:08.750001  384982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:07:08.757435  384982 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:07:08.757451  384982 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:07:08.757493  384982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:07:08.764462  384982 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:07:08.765259  384982 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-624263" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:08.765847  384982 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-624263" cluster setting kubeconfig missing "newest-cni-624263" context setting]
	I1205 07:07:08.766845  384982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.768427  384982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:07:08.775598  384982 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1205 07:07:08.775623  384982 kubeadm.go:602] duration metric: took 18.165924ms to restartPrimaryControlPlane
	I1205 07:07:08.775632  384982 kubeadm.go:403] duration metric: took 66.480576ms to StartCluster
	I1205 07:07:08.775648  384982 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.775713  384982 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:08.777693  384982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.777931  384982 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:07:08.777993  384982 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:07:08.778091  384982 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-624263"
	I1205 07:07:08.778111  384982 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-624263"
	W1205 07:07:08.778120  384982 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:07:08.778116  384982 addons.go:70] Setting dashboard=true in profile "newest-cni-624263"
	I1205 07:07:08.778140  384982 addons.go:239] Setting addon dashboard=true in "newest-cni-624263"
	W1205 07:07:08.778150  384982 addons.go:248] addon dashboard should already be in state true
	I1205 07:07:08.778164  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:08.778186  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.778150  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.778139  384982 addons.go:70] Setting default-storageclass=true in profile "newest-cni-624263"
	I1205 07:07:08.778303  384982 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-624263"
	I1205 07:07:08.778585  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.778752  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.778783  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.779765  384982 out.go:179] * Verifying Kubernetes components...
	I1205 07:07:08.780933  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:08.804889  384982 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:07:08.804889  384982 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:07:08.806580  384982 addons.go:239] Setting addon default-storageclass=true in "newest-cni-624263"
	W1205 07:07:08.806597  384982 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:07:08.806617  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.806903  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.807441  384982 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:07:08.807461  384982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:07:08.807530  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.808424  384982 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 07:07:08.809309  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:07:08.809353  384982 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:07:08.809407  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.834751  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.836077  384982 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:07:08.836291  384982 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:07:08.837052  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.842660  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.859675  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.933525  384982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:07:08.947274  384982 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:07:08.947358  384982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:07:08.951314  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:07:08.951373  384982 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:07:08.952715  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:07:08.960188  384982 api_server.go:72] duration metric: took 182.229824ms to wait for apiserver process to appear ...
	I1205 07:07:08.960210  384982 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:07:08.960226  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:08.965821  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:07:08.965841  384982 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:07:08.967346  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:07:08.980049  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:07:08.980067  384982 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:07:08.994281  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:07:08.994299  384982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:07:09.008287  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:07:09.008306  384982 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:07:09.021481  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:07:09.021501  384982 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:07:09.034096  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:07:09.034115  384982 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:07:09.046446  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:07:09.046466  384982 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:07:09.058389  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:07:09.058405  384982 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:07:09.070248  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:07:10.183992  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 07:07:10.184023  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 07:07:10.184136  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.262013  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.262086  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:10.460707  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.465761  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.465796  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:10.811423  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.858674166s)
	I1205 07:07:10.811423  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.8440466s)
	I1205 07:07:10.811561  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.741287368s)
	I1205 07:07:10.815716  384982 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-624263 addons enable metrics-server
	
	I1205 07:07:10.822997  384982 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1205 07:07:10.824128  384982 addons.go:530] duration metric: took 2.046144375s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
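With the addon manifests applied, the resulting objects can be checked from the host; a sketch assuming the kubeconfig context matches the profile name, as minikube normally sets it:

  kubectl --context newest-cni-624263 -n kubernetes-dashboard get deploy,svc
  kubectl --context newest-cni-624263 -n kube-system get pod storage-provisioner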
	I1205 07:07:10.961075  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.965412  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.965439  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:11.461149  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:11.465102  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1205 07:07:11.466004  384982 api_server.go:141] control plane version: v1.35.0-beta.0
	I1205 07:07:11.466025  384982 api_server.go:131] duration metric: took 2.505809422s to wait for apiserver health ...
	I1205 07:07:11.466034  384982 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:07:11.469408  384982 system_pods.go:59] 8 kube-system pods found
	I1205 07:07:11.469441  384982 system_pods.go:61] "coredns-7d764666f9-jkmhj" [126785e3-c7a3-451f-ac72-e05d87bb32f0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1205 07:07:11.469449  384982 system_pods.go:61] "etcd-newest-cni-624263" [9a4fe128-6030-4681-b201-a2a13ac29474] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:07:11.469475  384982 system_pods.go:61] "kindnet-fctwl" [29a59939-b66c-4796-9a9e-e1b442bccf1f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:07:11.469490  384982 system_pods.go:61] "kube-apiserver-newest-cni-624263" [2fc9852f-c8d5-41c2-8dbe-41056e227d75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:07:11.469499  384982 system_pods.go:61] "kube-controller-manager-newest-cni-624263" [957b864f-8ee5-40ce-9e1f-4396041c4525] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:07:11.469510  384982 system_pods.go:61] "kube-proxy-8v5qr" [59595bdd-49dc-4491-b494-1c48474ea8c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:07:11.469520  384982 system_pods.go:61] "kube-scheduler-newest-cni-624263" [a3c04907-1ac1-43af-827b-b4ab46dd553c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:07:11.469533  384982 system_pods.go:61] "storage-provisioner" [1cfc97af-739e-4ee9-838a-75962c29bc63] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1205 07:07:11.469542  384982 system_pods.go:74] duration metric: took 3.503315ms to wait for pod list to return data ...
	I1205 07:07:11.469551  384982 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:07:11.471664  384982 default_sa.go:45] found service account: "default"
	I1205 07:07:11.471681  384982 default_sa.go:55] duration metric: took 2.121784ms for default service account to be created ...
	I1205 07:07:11.471691  384982 kubeadm.go:587] duration metric: took 2.693735692s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:07:11.471704  384982 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:07:11.473883  384982 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:07:11.473903  384982 node_conditions.go:123] node cpu capacity is 8
	I1205 07:07:11.473915  384982 node_conditions.go:105] duration metric: took 2.207592ms to run NodePressure ...
	I1205 07:07:11.473924  384982 start.go:242] waiting for startup goroutines ...
	I1205 07:07:11.473931  384982 start.go:247] waiting for cluster config update ...
	I1205 07:07:11.473942  384982 start.go:256] writing updated cluster config ...
	I1205 07:07:11.474153  384982 ssh_runner.go:195] Run: rm -f paused
	I1205 07:07:11.522329  384982 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1205 07:07:11.524757  384982 out.go:179] * Done! kubectl is now configured to use "newest-cni-624263" cluster and "default" namespace by default
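
Note on the wait shown above: the api_server.go step simply polls the apiserver's /healthz endpoint until it stops returning 500 (with the [+]/[-] poststarthook breakdown) and returns 200. A minimal sketch of that polling pattern in Go follows; it is not minikube's actual implementation, and the endpoint URL and intervals are placeholders taken from this log. The self-signed certificate assumption is why TLS verification is skipped.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the timeout expires. A 500 response body lists the individual
// [+]/[-] poststarthook checks, as seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Placeholder endpoint; the log above polled https://192.168.103.2:8443/healthz.
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the log, the loop takes about 2.5 seconds: two 500 responses while the rbac and priority-class bootstrap hooks finish, then a 200.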
	W1205 07:07:06.706696  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:07:08.706849  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:10.705104  375543 pod_ready.go:94] pod "coredns-66bc5c9577-rg55r" is "Ready"
	I1205 07:07:10.705136  375543 pod_ready.go:86] duration metric: took 31.504740744s for pod "coredns-66bc5c9577-rg55r" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.707363  375543 pod_ready.go:83] waiting for pod "etcd-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.711598  375543 pod_ready.go:94] pod "etcd-embed-certs-770390" is "Ready"
	I1205 07:07:10.711616  375543 pod_ready.go:86] duration metric: took 4.234427ms for pod "etcd-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.713476  375543 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.717163  375543 pod_ready.go:94] pod "kube-apiserver-embed-certs-770390" is "Ready"
	I1205 07:07:10.717181  375543 pod_ready.go:86] duration metric: took 3.676871ms for pod "kube-apiserver-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.719115  375543 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.903969  375543 pod_ready.go:94] pod "kube-controller-manager-embed-certs-770390" is "Ready"
	I1205 07:07:10.903993  375543 pod_ready.go:86] duration metric: took 184.859493ms for pod "kube-controller-manager-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.104836  375543 pod_ready.go:83] waiting for pod "kube-proxy-7bjnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.504196  375543 pod_ready.go:94] pod "kube-proxy-7bjnn" is "Ready"
	I1205 07:07:11.504227  375543 pod_ready.go:86] duration metric: took 399.358917ms for pod "kube-proxy-7bjnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.703987  375543 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:12.103435  375543 pod_ready.go:94] pod "kube-scheduler-embed-certs-770390" is "Ready"
	I1205 07:07:12.103462  375543 pod_ready.go:86] duration metric: took 399.448083ms for pod "kube-scheduler-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:12.103479  375543 pod_ready.go:40] duration metric: took 32.906123608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:07:12.153648  375543 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 07:07:12.156415  375543 out.go:179] * Done! kubectl is now configured to use "embed-certs-770390" cluster and "default" namespace by default
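
For context, the pod_ready.go loop above waits until every labelled kube-system pod (coredns, etcd, kube-apiserver, and so on) reports the Ready condition, or disappears. A rough client-go sketch of that kind of wait is shown below; the function and selector names are illustrative assumptions, not minikube's code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsReady blocks until every pod matching the label selector in the
// given namespace has the Ready condition set to True, or the timeout expires.
func waitForPodsReady(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		if len(pods.Items) == 0 {
			return false, nil
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// k8s-app=kube-dns matches the coredns pod waited on in the log above.
	if err := waitForPodsReady(cs, "kube-system", "k8s-app=kube-dns", 2*time.Minute); err != nil {
		fmt.Println("pods never became Ready:", err)
	}
}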
	
	
	==> CRI-O <==
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.369705791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.372516097Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ab3d6aad-91f8-4320-b7fe-6263b7982596 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.373163996Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=352bedaf-2ced-4e95-91ab-82e20a884b39 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.373868319Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.374504174Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.374796908Z" level=info msg="Ran pod sandbox 3907434eb54becc5229939bd66f17481d6fe0dc1acad139365172c3c35f75bb7 with infra container: kube-system/kindnet-fctwl/POD" id=ab3d6aad-91f8-4320-b7fe-6263b7982596 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.375278921Z" level=info msg="Ran pod sandbox 8f67727f97493cd5f1f1132e5371fc981725a2110f2e2e0386530b77bf44559e with infra container: kube-system/kube-proxy-8v5qr/POD" id=352bedaf-2ced-4e95-91ab-82e20a884b39 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.375797349Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0d0498f1-e01b-4748-839c-dd0f804e9912 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.376115224Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1ecaebba-d66c-45a1-b32e-c05e66ea1a66 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.376677161Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5c1c992f-ef5d-4cb5-9630-d2963888fc1e name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.377037561Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=889b9399-72dd-445d-8f42-932bde7cfcdb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.377757848Z" level=info msg="Creating container: kube-system/kindnet-fctwl/kindnet-cni" id=60d05929-03e1-4bc3-99eb-5faa32cb5609 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.377846902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.377930722Z" level=info msg="Creating container: kube-system/kube-proxy-8v5qr/kube-proxy" id=09c058a1-fdc4-4b93-a44c-f9e7c357a649 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.378063382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.381924377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.382497732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.384201699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.384683839Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.409840313Z" level=info msg="Created container 8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf: kube-system/kindnet-fctwl/kindnet-cni" id=60d05929-03e1-4bc3-99eb-5faa32cb5609 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.410348584Z" level=info msg="Starting container: 8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf" id=e38ed321-f72a-4ecb-addb-edbdfccb7522 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.412086564Z" level=info msg="Started container" PID=1045 containerID=8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf description=kube-system/kindnet-fctwl/kindnet-cni id=e38ed321-f72a-4ecb-addb-edbdfccb7522 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3907434eb54becc5229939bd66f17481d6fe0dc1acad139365172c3c35f75bb7
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.414344948Z" level=info msg="Created container af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4: kube-system/kube-proxy-8v5qr/kube-proxy" id=09c058a1-fdc4-4b93-a44c-f9e7c357a649 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.414824568Z" level=info msg="Starting container: af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4" id=11c9f22b-2a23-432b-a3d6-55f9dacb25b4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:07:11 newest-cni-624263 crio[525]: time="2025-12-05T07:07:11.417677733Z" level=info msg="Started container" PID=1046 containerID=af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4 description=kube-system/kube-proxy-8v5qr/kube-proxy id=11c9f22b-2a23-432b-a3d6-55f9dacb25b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f67727f97493cd5f1f1132e5371fc981725a2110f2e2e0386530b77bf44559e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	af4221aba90f3       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   5 seconds ago       Running             kube-proxy                1                   8f67727f97493       kube-proxy-8v5qr                            kube-system
	8be704aa57ce4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   3907434eb54be       kindnet-fctwl                               kube-system
	d0abfce5c087b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   eca55194a02a1       etcd-newest-cni-624263                      kube-system
	b7dd1526bcbcd       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   990022e5d8b06       kube-apiserver-newest-cni-624263            kube-system
	ff2c7439c6494       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   2dffcf88ee1f7       kube-controller-manager-newest-cni-624263   kube-system
	5bbad9411c173       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   d12e7ea652633       kube-scheduler-newest-cni-624263            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-624263
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-624263
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=newest-cni-624263
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_06_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:06:47 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-624263
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:07:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:07:10 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:07:10 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:07:10 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 05 Dec 2025 07:07:10 +0000   Fri, 05 Dec 2025 07:06:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-624263
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                74ead395-c6a4-4eb4-a8b4-1e768c64ff0f
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-624263                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-fctwl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-624263             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-624263    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-8v5qr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-624263             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  22s   node-controller  Node newest-cni-624263 event: Registered Node newest-cni-624263 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-624263 event: Registered Node newest-cni-624263 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [d0abfce5c087bc9745f6cbf4f3fb0edbb94d2f33857125e80fac708771ec2b48] <==
	{"level":"warn","ts":"2025-12-05T07:07:09.557514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.563625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.576598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.584248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.591221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.598200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.606086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.613535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.619914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.626530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.640443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.647537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.654013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.660022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.666753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.673759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.683009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.692305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.700151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.708140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.720158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.740403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.746734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.754948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:07:09.805599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37816","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:07:16 up  1:49,  0 user,  load average: 3.65, 3.35, 2.30
	Linux newest-cni-624263 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8be704aa57ce44faca387d9c6111943379608f6726a0b087bb438be2e0c766bf] <==
	I1205 07:07:11.606019       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:07:11.698303       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1205 07:07:11.698456       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:07:11.698474       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:07:11.698499       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:07:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:07:11.899011       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:07:11.899084       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:07:11.899106       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:07:11.899266       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:07:12.299236       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:07:12.299406       1 metrics.go:72] Registering metrics
	I1205 07:07:12.299547       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [b7dd1526bcbcdee4bcb466e7fb00e9c6e45c6a7c643eaff455cc39e8cadcb7d0] <==
	I1205 07:07:10.276580       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:07:10.276587       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:07:10.276594       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:07:10.276812       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 07:07:10.276818       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:10.276843       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1205 07:07:10.276822       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 07:07:10.277069       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 07:07:10.277880       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1205 07:07:10.283400       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1205 07:07:10.292624       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 07:07:10.304238       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:07:10.327771       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:07:10.547689       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 07:07:10.579897       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:07:10.600705       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:07:10.610733       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:07:10.620660       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:07:10.658458       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.112.38"}
	I1205 07:07:10.669424       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.92.171"}
	I1205 07:07:11.179774       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1205 07:07:13.909546       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:07:14.009661       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:07:14.059830       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:07:14.111041       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ff2c7439c6494a7c11b9c98603177548654b07fa8af90217d8bc284c40e1913f] <==
	I1205 07:07:13.411823       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.411835       1 range_allocator.go:177] "Sending events to api server"
	I1205 07:07:13.411798       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.411859       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1205 07:07:13.411865       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:07:13.411874       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.411890       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.411942       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.411969       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412014       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412025       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412097       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412256       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412269       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412545       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412711       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.412909       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.413092       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.413130       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.415314       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.419797       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:07:13.511399       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:13.511417       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1205 07:07:13.511423       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1205 07:07:13.520061       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [af4221aba90f31f48dfd2ce83495509a8af86cdf9b48991d525ab08466004fc4] <==
	I1205 07:07:11.450702       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:07:11.523058       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:07:11.623266       1 shared_informer.go:377] "Caches are synced"
	I1205 07:07:11.623343       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1205 07:07:11.623498       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:07:11.643042       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:07:11.643091       1 server_linux.go:136] "Using iptables Proxier"
	I1205 07:07:11.648007       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:07:11.648419       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1205 07:07:11.648460       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:07:11.649877       1 config.go:200] "Starting service config controller"
	I1205 07:07:11.649903       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:07:11.649905       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:07:11.649920       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:07:11.649941       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:07:11.649955       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:07:11.650048       1 config.go:309] "Starting node config controller"
	I1205 07:07:11.650106       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:07:11.650119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:07:11.749997       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:07:11.750026       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:07:11.750065       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5bbad9411c1730fb8fc31fd993b9c05654fd82cb5d89486f02679e687a86062c] <==
	I1205 07:07:08.765195       1 serving.go:386] Generated self-signed cert in-memory
	W1205 07:07:10.204204       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 07:07:10.204259       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 07:07:10.204280       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 07:07:10.204289       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 07:07:10.257218       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1205 07:07:10.257299       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:07:10.260081       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:07:10.260115       1 shared_informer.go:370] "Waiting for caches to sync"
	I1205 07:07:10.260220       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 07:07:10.260405       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 07:07:10.360462       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.292737     666 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: E1205 07:07:10.301235     666 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-624263\" already exists" pod="kube-system/kube-apiserver-newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.301269     666 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.304374     666 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.304470     666 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.304506     666 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.305288     666 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: E1205 07:07:10.307692     666 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-624263\" already exists" pod="kube-system/kube-controller-manager-newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: I1205 07:07:10.307724     666 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-624263"
	Dec 05 07:07:10 newest-cni-624263 kubelet[666]: E1205 07:07:10.316011     666 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-624263\" already exists" pod="kube-system/kube-scheduler-newest-cni-624263"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.060622     666 apiserver.go:52] "Watching apiserver"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: E1205 07:07:11.065561     666 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-624263" containerName="kube-controller-manager"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.068075     666 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: E1205 07:07:11.106025     666 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-624263" containerName="kube-apiserver"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: E1205 07:07:11.106136     666 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-624263" containerName="etcd"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: E1205 07:07:11.106401     666 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-624263" containerName="kube-scheduler"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.122517     666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59595bdd-49dc-4491-b494-1c48474ea8c4-lib-modules\") pod \"kube-proxy-8v5qr\" (UID: \"59595bdd-49dc-4491-b494-1c48474ea8c4\") " pod="kube-system/kube-proxy-8v5qr"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.122564     666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/29a59939-b66c-4796-9a9e-e1b442bccf1f-cni-cfg\") pod \"kindnet-fctwl\" (UID: \"29a59939-b66c-4796-9a9e-e1b442bccf1f\") " pod="kube-system/kindnet-fctwl"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.122588     666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29a59939-b66c-4796-9a9e-e1b442bccf1f-lib-modules\") pod \"kindnet-fctwl\" (UID: \"29a59939-b66c-4796-9a9e-e1b442bccf1f\") " pod="kube-system/kindnet-fctwl"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.122632     666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59595bdd-49dc-4491-b494-1c48474ea8c4-xtables-lock\") pod \"kube-proxy-8v5qr\" (UID: \"59595bdd-49dc-4491-b494-1c48474ea8c4\") " pod="kube-system/kube-proxy-8v5qr"
	Dec 05 07:07:11 newest-cni-624263 kubelet[666]: I1205 07:07:11.122670     666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29a59939-b66c-4796-9a9e-e1b442bccf1f-xtables-lock\") pod \"kindnet-fctwl\" (UID: \"29a59939-b66c-4796-9a9e-e1b442bccf1f\") " pod="kube-system/kindnet-fctwl"
	Dec 05 07:07:12 newest-cni-624263 kubelet[666]: E1205 07:07:12.217880     666 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-624263" containerName="etcd"
	Dec 05 07:07:12 newest-cni-624263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:07:12 newest-cni-624263 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:07:12 newest-cni-624263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-624263 -n newest-cni-624263
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-624263 -n newest-cni-624263: exit status 2 (312.343352ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-624263 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-jkmhj storage-provisioner dashboard-metrics-scraper-867fb5f87b-fzkdj kubernetes-dashboard-b84665fb8-h2xph
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-624263 describe pod coredns-7d764666f9-jkmhj storage-provisioner dashboard-metrics-scraper-867fb5f87b-fzkdj kubernetes-dashboard-b84665fb8-h2xph
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-624263 describe pod coredns-7d764666f9-jkmhj storage-provisioner dashboard-metrics-scraper-867fb5f87b-fzkdj kubernetes-dashboard-b84665fb8-h2xph: exit status 1 (57.073119ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jkmhj" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-fzkdj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-h2xph" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-624263 describe pod coredns-7d764666f9-jkmhj storage-provisioner dashboard-metrics-scraper-867fb5f87b-fzkdj kubernetes-dashboard-b84665fb8-h2xph: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.25s)
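
The post-mortem step above collects non-running pods with kubectl's --field-selector=status.phase!=Running. A hedged client-go equivalent of that query is sketched below; it is only an illustration of the field selector, not the helpers_test.go code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Empty namespace means "all namespaces"; the field selector mirrors the
	// --field-selector=status.phase!=Running flag used in the post-mortem step.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}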

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-770390 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-770390 --alsologtostderr -v=1: exit status 80 (1.584568826s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-770390 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:07:23.869095  391331 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:07:23.869180  391331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:23.869188  391331 out.go:374] Setting ErrFile to fd 2...
	I1205 07:07:23.869192  391331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:23.869441  391331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:07:23.869663  391331 out.go:368] Setting JSON to false
	I1205 07:07:23.869680  391331 mustload.go:66] Loading cluster: embed-certs-770390
	I1205 07:07:23.870006  391331 config.go:182] Loaded profile config "embed-certs-770390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:07:23.870345  391331 cli_runner.go:164] Run: docker container inspect embed-certs-770390 --format={{.State.Status}}
	I1205 07:07:23.887397  391331 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:07:23.887611  391331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:23.942216  391331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-05 07:07:23.932540926 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:23.943038  391331 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-770390 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1205 07:07:23.944929  391331 out.go:179] * Pausing node embed-certs-770390 ... 
	I1205 07:07:23.946244  391331 host.go:66] Checking if "embed-certs-770390" exists ...
	I1205 07:07:23.946520  391331 ssh_runner.go:195] Run: systemctl --version
	I1205 07:07:23.946572  391331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-770390
	I1205 07:07:23.963170  391331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/embed-certs-770390/id_rsa Username:docker}
	I1205 07:07:24.058419  391331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:24.069932  391331 pause.go:52] kubelet running: true
	I1205 07:07:24.069985  391331 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:24.224920  391331 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:24.224993  391331 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:24.286302  391331 cri.go:89] found id: "1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088"
	I1205 07:07:24.286341  391331 cri.go:89] found id: "df691a881bd8857e9f27b30400e75e80f5c1dd193eeaa849cf64bcb156b4f2bc"
	I1205 07:07:24.286349  391331 cri.go:89] found id: "688c23ae1eefd91ac5bf2ce60c2ea6c1c9f585b311b36fd061bffce62338bb1c"
	I1205 07:07:24.286354  391331 cri.go:89] found id: "ee851fb4ae660958b7ef530ba88b955a76f13d0142203ad5c0fc539d6d40c0d8"
	I1205 07:07:24.286357  391331 cri.go:89] found id: "6177c64055ee5f3bacac5f8934dc2061c6a6b0d2a95b03bf4373af7a3cbcaf0b"
	I1205 07:07:24.286361  391331 cri.go:89] found id: "2e99e708af8cdf7e8644b2c854970fe3b2f9131df99f8ff6c3a19b08659e1df2"
	I1205 07:07:24.286364  391331 cri.go:89] found id: "4d4e5c825a7de3068675039cb3151e44142096587a1c8f2d75ad7ecbd5429caa"
	I1205 07:07:24.286366  391331 cri.go:89] found id: "923febfdc8bccb1ad8239b49c08d7497c407d21accd38106c20a1aba8cecaffb"
	I1205 07:07:24.286369  391331 cri.go:89] found id: "ae1745cf83f11e7391209efe832ac4ca4aab557828ba3aab75cf48e7fe75b73f"
	I1205 07:07:24.286383  391331 cri.go:89] found id: "9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019"
	I1205 07:07:24.286390  391331 cri.go:89] found id: "7a3eada6f877e1286c7e6a656066b8252366921900d5eaa0ad8a32a8ddfb215e"
	I1205 07:07:24.286393  391331 cri.go:89] found id: ""
	I1205 07:07:24.286428  391331 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:24.297799  391331 retry.go:31] will retry after 145.72412ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:24Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:24.444222  391331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:24.456588  391331 pause.go:52] kubelet running: false
	I1205 07:07:24.456648  391331 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:24.591431  391331 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:24.591518  391331 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:24.654028  391331 cri.go:89] found id: "1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088"
	I1205 07:07:24.654047  391331 cri.go:89] found id: "df691a881bd8857e9f27b30400e75e80f5c1dd193eeaa849cf64bcb156b4f2bc"
	I1205 07:07:24.654051  391331 cri.go:89] found id: "688c23ae1eefd91ac5bf2ce60c2ea6c1c9f585b311b36fd061bffce62338bb1c"
	I1205 07:07:24.654054  391331 cri.go:89] found id: "ee851fb4ae660958b7ef530ba88b955a76f13d0142203ad5c0fc539d6d40c0d8"
	I1205 07:07:24.654057  391331 cri.go:89] found id: "6177c64055ee5f3bacac5f8934dc2061c6a6b0d2a95b03bf4373af7a3cbcaf0b"
	I1205 07:07:24.654061  391331 cri.go:89] found id: "2e99e708af8cdf7e8644b2c854970fe3b2f9131df99f8ff6c3a19b08659e1df2"
	I1205 07:07:24.654064  391331 cri.go:89] found id: "4d4e5c825a7de3068675039cb3151e44142096587a1c8f2d75ad7ecbd5429caa"
	I1205 07:07:24.654067  391331 cri.go:89] found id: "923febfdc8bccb1ad8239b49c08d7497c407d21accd38106c20a1aba8cecaffb"
	I1205 07:07:24.654070  391331 cri.go:89] found id: "ae1745cf83f11e7391209efe832ac4ca4aab557828ba3aab75cf48e7fe75b73f"
	I1205 07:07:24.654086  391331 cri.go:89] found id: "9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019"
	I1205 07:07:24.654095  391331 cri.go:89] found id: "7a3eada6f877e1286c7e6a656066b8252366921900d5eaa0ad8a32a8ddfb215e"
	I1205 07:07:24.654099  391331 cri.go:89] found id: ""
	I1205 07:07:24.654141  391331 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:24.664947  391331 retry.go:31] will retry after 504.638872ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:24Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:25.170806  391331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:25.182998  391331 pause.go:52] kubelet running: false
	I1205 07:07:25.183046  391331 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:07:25.316656  391331 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:07:25.316736  391331 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:07:25.377505  391331 cri.go:89] found id: "1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088"
	I1205 07:07:25.377523  391331 cri.go:89] found id: "df691a881bd8857e9f27b30400e75e80f5c1dd193eeaa849cf64bcb156b4f2bc"
	I1205 07:07:25.377527  391331 cri.go:89] found id: "688c23ae1eefd91ac5bf2ce60c2ea6c1c9f585b311b36fd061bffce62338bb1c"
	I1205 07:07:25.377530  391331 cri.go:89] found id: "ee851fb4ae660958b7ef530ba88b955a76f13d0142203ad5c0fc539d6d40c0d8"
	I1205 07:07:25.377533  391331 cri.go:89] found id: "6177c64055ee5f3bacac5f8934dc2061c6a6b0d2a95b03bf4373af7a3cbcaf0b"
	I1205 07:07:25.377537  391331 cri.go:89] found id: "2e99e708af8cdf7e8644b2c854970fe3b2f9131df99f8ff6c3a19b08659e1df2"
	I1205 07:07:25.377540  391331 cri.go:89] found id: "4d4e5c825a7de3068675039cb3151e44142096587a1c8f2d75ad7ecbd5429caa"
	I1205 07:07:25.377542  391331 cri.go:89] found id: "923febfdc8bccb1ad8239b49c08d7497c407d21accd38106c20a1aba8cecaffb"
	I1205 07:07:25.377545  391331 cri.go:89] found id: "ae1745cf83f11e7391209efe832ac4ca4aab557828ba3aab75cf48e7fe75b73f"
	I1205 07:07:25.377567  391331 cri.go:89] found id: "9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019"
	I1205 07:07:25.377573  391331 cri.go:89] found id: "7a3eada6f877e1286c7e6a656066b8252366921900d5eaa0ad8a32a8ddfb215e"
	I1205 07:07:25.377575  391331 cri.go:89] found id: ""
	I1205 07:07:25.377618  391331 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:07:25.390223  391331 out.go:203] 
	W1205 07:07:25.391514  391331 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 07:07:25.391541  391331 out.go:285] * 
	* 
	W1205 07:07:25.395708  391331 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:07:25.397448  391331 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-770390 --alsologtostderr -v=1 failed: exit status 80
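Editor's note on the failure above: the pause path first disables the kubelet (sudo systemctl disable --now kubelet), successfully lists the kube-system containers through crictl, then tries to enumerate running containers with "sudo runc list -f json", which exits 1 because /run/runc does not exist on this crio node; after the retries it gives up with GUEST_PAUSE and exit status 80, leaving the node with the kubelet stopped but the crio-managed pods still running. A minimal way to reproduce both sides of that mismatch from the host, assuming docker exec access to the node container (hypothetical commands, not part of the test run):

        docker exec embed-certs-770390 sudo runc list -f json
        # fails with: open /run/runc: no such file or directory
        docker exec embed-certs-770390 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
        # still lists the same kube-system container IDs reported by cri.go in the log above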
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-770390
helpers_test.go:243: (dbg) docker inspect embed-certs-770390:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15",
	        "Created": "2025-12-05T07:04:47.935376196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 375842,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:06:26.832137791Z",
	            "FinishedAt": "2025-12-05T07:06:25.952595519Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/hostname",
	        "HostsPath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/hosts",
	        "LogPath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15-json.log",
	        "Name": "/embed-certs-770390",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-770390:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-770390",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15",
	                "LowerDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-770390",
	                "Source": "/var/lib/docker/volumes/embed-certs-770390/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-770390",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-770390",
	                "name.minikube.sigs.k8s.io": "embed-certs-770390",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6ceedc51fff3b7c6cac40b22a355481dbcbd397954c5ec86671641d1d0faa2a7",
	            "SandboxKey": "/var/run/docker/netns/6ceedc51fff3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-770390": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "931902d22986d998cad8286fbe16fdac2b5321eb6ca6ce1a3581e586ebb4b1ac",
	                    "EndpointID": "25a823ef62aebaacb31c310d5b612601cc9b7a981bbb2d0235fbdafb87a78f35",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "1a:8d:51:2d:e5:c8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-770390",
	                        "efaf2da28c0c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
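The inspect output confirms the container itself is healthy: State.Status is "running" and the node's ports are published on 127.0.0.1 (22 -> 33128, 8443 -> 33131, and so on), which is how the SSH connection on port 33128 in the pause log was resolved. The same lookup can be done by hand with the template minikube uses (a sketch; the quoting differs slightly from the nested-quote form shown in the log):

        docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-770390
        # prints 33128 for this run, matching the Ports section above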
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-770390 -n embed-certs-770390
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-770390 -n embed-certs-770390: exit status 2 (307.970579ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
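Note that status --format={{.Host}} reports only the host state, so it prints "Running" while still exiting with status 2; the non-zero code most likely reflects the other components (the kubelet was disabled by the failed pause), and the helper explicitly tolerates it. A fuller picture would come from the unfiltered status command (hypothetical follow-up, not run by the test):

        out/minikube-linux-amd64 status -p embed-certs-770390
        # would likely show the host Running and the kubelet Stopped after the partial pause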
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-770390 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-770390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ no-preload-008839 image list --format=json                                                                                                                                                                                                           │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p no-preload-008839 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-624263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p newest-cni-624263 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-624263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ default-k8s-diff-port-172186 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ pause   │ -p default-k8s-diff-port-172186 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-172186                                                                                                                                                                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ delete  │ -p default-k8s-diff-port-172186                                                                                                                                                                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ newest-cni-624263 image list --format=json                                                                                                                                                                                                           │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ delete  │ -p newest-cni-624263                                                                                                                                                                                                                                 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ delete  │ -p newest-cni-624263                                                                                                                                                                                                                                 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ embed-certs-770390 image list --format=json                                                                                                                                                                                                          │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ pause   │ -p embed-certs-770390 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:07:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:07:01.213912  384982 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:07:01.214313  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214349  384982 out.go:374] Setting ErrFile to fd 2...
	I1205 07:07:01.214355  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214781  384982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:07:01.215653  384982 out.go:368] Setting JSON to false
	I1205 07:07:01.216724  384982 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6565,"bootTime":1764911856,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:07:01.216808  384982 start.go:143] virtualization: kvm guest
	I1205 07:07:01.218407  384982 out.go:179] * [newest-cni-624263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:07:01.219810  384982 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:07:01.219833  384982 notify.go:221] Checking for updates...
	I1205 07:07:01.222062  384982 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:07:01.223099  384982 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:01.224159  384982 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:07:01.228780  384982 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:07:01.229941  384982 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:07:01.231538  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:01.232012  384982 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:07:01.255273  384982 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:07:01.255390  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.307181  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.297693108 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.307271  384982 docker.go:319] overlay module found
	I1205 07:07:01.308817  384982 out.go:179] * Using the docker driver based on existing profile
	I1205 07:07:01.309938  384982 start.go:309] selected driver: docker
	I1205 07:07:01.309951  384982 start.go:927] validating driver "docker" against &{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.310051  384982 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:07:01.310627  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.362953  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.353513591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.363234  384982 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:07:01.363265  384982 cni.go:84] Creating CNI manager for ""
	I1205 07:07:01.363312  384982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:07:01.363388  384982 start.go:353] cluster config:
	{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.364930  384982 out.go:179] * Starting "newest-cni-624263" primary control-plane node in "newest-cni-624263" cluster
	I1205 07:07:01.365960  384982 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:07:01.367044  384982 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	W1205 07:06:57.706664  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:59.707033  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:01.368093  384982 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:07:01.368198  384982 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:07:01.387169  384982 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:07:01.387192  384982 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:07:01.393466  384982 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1205 07:07:01.635612  384982 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 07:07:01.635800  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.635881  384982 cache.go:107] acquiring lock: {Name:mk98363952ca1815516604fb7dbfef9be11a7d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635913  384982 cache.go:107] acquiring lock: {Name:mkf79bca1dcd2e8402871ccbd85f08189f26d5a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635887  384982 cache.go:107] acquiring lock: {Name:mk7e52439bbd1c3c582b2dbb20db8467b0caa4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635883  384982 cache.go:107] acquiring lock: {Name:mk205a6d5dedd135c0c99429d72b9328d8d5dc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635961  384982 cache.go:107] acquiring lock: {Name:mk167c9428ef1965e0e29561c9593491905126f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636001  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:07:01.635990  384982 cache.go:107] acquiring lock: {Name:mk64ac073eb60c52be1998c1349c3f317eb7eb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:07:01.636013  384982 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 137.69µs
	I1205 07:07:01.636037  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:07:01.636039  384982 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:07:01.636031  384982 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 171.708µs
	I1205 07:07:01.636003  384982 cache.go:107] acquiring lock: {Name:mk55ddd5ec022e6049bc6d750efbad0639669233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636029  384982 cache.go:107] acquiring lock: {Name:mk4eccc9886628e868c0adec616b704f1c193356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636046  384982 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 87.511µs
	I1205 07:07:01.636052  384982 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636064  384982 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636066  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:07:01.636074  384982 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 88.508µs
	I1205 07:07:01.636082  384982 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:07:01.636019  384982 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 125.111µs
	I1205 07:07:01.636098  384982 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:07:01.636112  384982 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:07:01.636042  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:07:01.636150  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:07:01.636147  384982 start.go:360] acquireMachinesLock for newest-cni-624263: {Name:mka35bbd7b5824f70f8017fd9b3a0ee56ab72931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636147  384982 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 265.61µs
	I1205 07:07:01.636162  384982 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636158  384982 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 197.698µs
	I1205 07:07:01.636178  384982 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:07:01.636191  384982 start.go:364] duration metric: took 30.266µs to acquireMachinesLock for "newest-cni-624263"
	I1205 07:07:01.636187  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:07:01.636206  384982 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:07:01.636205  384982 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 226.523µs
	I1205 07:07:01.636213  384982 fix.go:54] fixHost starting: 
	I1205 07:07:01.636216  384982 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636234  384982 cache.go:87] Successfully saved all images to host disk.
	I1205 07:07:01.636479  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.654206  384982 fix.go:112] recreateIfNeeded on newest-cni-624263: state=Stopped err=<nil>
	W1205 07:07:01.654241  384982 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:07:01.656485  384982 out.go:252] * Restarting existing docker container for "newest-cni-624263" ...
	I1205 07:07:01.656540  384982 cli_runner.go:164] Run: docker start newest-cni-624263
	I1205 07:07:01.895199  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.914785  384982 kic.go:430] container "newest-cni-624263" state is running.
	I1205 07:07:01.915225  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:01.934239  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.934479  384982 machine.go:94] provisionDockerMachine start ...
	I1205 07:07:01.934568  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:01.952380  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:01.952665  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:01.952679  384982 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:07:01.953292  384982 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55518->127.0.0.1:33138: read: connection reset by peer
	I1205 07:07:05.092419  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:07:05.092445  384982 ubuntu.go:182] provisioning hostname "newest-cni-624263"
	I1205 07:07:05.092491  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.112429  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.112718  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.112739  384982 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-624263 && echo "newest-cni-624263" | sudo tee /etc/hostname
	I1205 07:07:05.265486  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:07:05.265582  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.285453  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.285689  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.285716  384982 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-624263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-624263/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-624263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:07:05.425411  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:07:05.425436  384982 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:07:05.425464  384982 ubuntu.go:190] setting up certificates
	I1205 07:07:05.425475  384982 provision.go:84] configureAuth start
	I1205 07:07:05.425532  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:05.443549  384982 provision.go:143] copyHostCerts
	I1205 07:07:05.443614  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:07:05.443629  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:07:05.443700  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:07:05.443800  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:07:05.443816  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:07:05.443845  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:07:05.443904  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:07:05.443915  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:07:05.443950  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:07:05.444023  384982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.newest-cni-624263 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-624263]
	I1205 07:07:05.672635  384982 provision.go:177] copyRemoteCerts
	I1205 07:07:05.672684  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:07:05.672730  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.690043  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:05.792000  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:07:05.810085  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:07:05.827489  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:07:05.844988  384982 provision.go:87] duration metric: took 419.49922ms to configureAuth
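(configureAuth regenerates the machine server certificate with the SANs logged at provision.go:117 above: 127.0.0.1, 192.168.103.2, localhost, minikube, newest-cni-624263. Below is a minimal standard-library sketch of minting such a cert from a CA; the key sizes, validity period, and subject fields are assumptions for illustration, not minikube's actual implementation.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// In the real flow the CA key/cert come from ca-key.pem / ca.pem; a throwaway
	// CA is generated here so the sketch stays self-contained.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-624263"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go:117 line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-624263"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}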
	I1205 07:07:05.845013  384982 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:07:05.845213  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:05.845355  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.868784  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.868985  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.869010  384982 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:07:06.168481  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:07:06.168508  384982 machine.go:97] duration metric: took 4.234011493s to provisionDockerMachine
	I1205 07:07:06.168521  384982 start.go:293] postStartSetup for "newest-cni-624263" (driver="docker")
	I1205 07:07:06.168536  384982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:07:06.168593  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:07:06.168662  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.188502  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	W1205 07:07:02.207380  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:07:04.704952  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:06.292387  384982 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:07:06.295922  384982 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:07:06.295950  384982 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:07:06.295961  384982 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:07:06.296006  384982 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:07:06.296104  384982 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:07:06.296231  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:07:06.303904  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:07:06.321264  384982 start.go:296] duration metric: took 152.731097ms for postStartSetup
	I1205 07:07:06.321343  384982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:07:06.321386  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.342624  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.439978  384982 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:07:06.444248  384982 fix.go:56] duration metric: took 4.8080316s for fixHost
	I1205 07:07:06.444268  384982 start.go:83] releasing machines lock for "newest-cni-624263", held for 4.808068962s
	I1205 07:07:06.444356  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:06.461188  384982 ssh_runner.go:195] Run: cat /version.json
	I1205 07:07:06.461224  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.461315  384982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:07:06.461389  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.479772  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.480279  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.758196  384982 ssh_runner.go:195] Run: systemctl --version
	I1205 07:07:06.764592  384982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:07:06.798459  384982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:07:06.802811  384982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:07:06.802860  384982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:07:06.810439  384982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:07:06.810458  384982 start.go:496] detecting cgroup driver to use...
	I1205 07:07:06.810483  384982 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:07:06.810515  384982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:07:06.823596  384982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:07:06.835347  384982 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:07:06.835386  384982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:07:06.849102  384982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:07:06.861013  384982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:07:06.946233  384982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:07:07.034814  384982 docker.go:234] disabling docker service ...
	I1205 07:07:07.034859  384982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:07:07.048490  384982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:07:07.062338  384982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:07:07.152172  384982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:07:07.242359  384982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:07:07.254816  384982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:07:07.268657  384982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:07:07.268723  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.277649  384982 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:07:07.277721  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.287203  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.296720  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.305673  384982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:07:07.314603  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.323209  384982 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.331118  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.339939  384982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:07:07.346935  384982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:07:07.354783  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:07.445879  384982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:07:07.588541  384982 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:07:07.588604  384982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:07:07.594687  384982 start.go:564] Will wait 60s for crictl version
	I1205 07:07:07.595153  384982 ssh_runner.go:195] Run: which crictl
	I1205 07:07:07.598691  384982 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:07:07.626384  384982 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
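(The two "Will wait 60s" lines above correspond to polling loops: one for the CRI-O socket to appear after the restart, one for crictl to answer. A rough sketch of the second loop is shown here; the 2-second retry interval is an assumption, and the sketch runs crictl locally on the node rather than through the ssh_runner used in the log.)

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Overall deadline matches the "Will wait 60s for crictl version" step.
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("crictl did not become ready within 60s:", err)
			return
		case <-time.After(2 * time.Second):
			// CRI-O may still be restarting; try again.
		}
	}
}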
	I1205 07:07:07.626465  384982 ssh_runner.go:195] Run: crio --version
	I1205 07:07:07.656627  384982 ssh_runner.go:195] Run: crio --version
	I1205 07:07:07.691598  384982 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 07:07:07.692738  384982 cli_runner.go:164] Run: docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:07:07.715101  384982 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1205 07:07:07.719286  384982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:07:07.731914  384982 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:07:07.733217  384982 kubeadm.go:884] updating cluster {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:07:07.733394  384982 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:07:07.733451  384982 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:07:07.764980  384982 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:07:07.765003  384982 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:07:07.765012  384982 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1205 07:07:07.765132  384982 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-624263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:07:07.765207  384982 ssh_runner.go:195] Run: crio config
	I1205 07:07:07.812534  384982 cni.go:84] Creating CNI manager for ""
	I1205 07:07:07.812555  384982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:07:07.812573  384982 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:07:07.812604  384982 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-624263 NodeName:newest-cni-624263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:07:07.812765  384982 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-624263"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:07:07.812831  384982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:07:07.820594  384982 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:07:07.820653  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:07:07.828109  384982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1205 07:07:07.840571  384982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:07:07.852346  384982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
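(The kubeadm config printed above is generated from the options struct logged at kubeadm.go:190 and written to the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of that kind of templating with text/template follows; the struct fields and template text are assumptions chosen for illustration, not minikube's real bootstrapper template.)

package main

import (
	"os"
	"text/template"
)

// kubeadmOptions loosely mirrors a few of the fields logged at kubeadm.go:190 above.
type kubeadmOptions struct {
	APIServerPort     int
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
	DNSDomain         string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values taken from the config dump above.
	opts := kubeadmOptions{
		APIServerPort:     8443,
		ClusterName:       "mk",
		KubernetesVersion: "v1.35.0-beta.0",
		PodSubnet:         "10.42.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		DNSDomain:         "cluster.local",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}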
	I1205 07:07:07.864062  384982 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:07:07.867420  384982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:07:07.876647  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:07.969578  384982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:07:07.991685  384982 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263 for IP: 192.168.103.2
	I1205 07:07:07.991713  384982 certs.go:195] generating shared ca certs ...
	I1205 07:07:07.991735  384982 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:07.991888  384982 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:07:07.991947  384982 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:07:07.991961  384982 certs.go:257] generating profile certs ...
	I1205 07:07:07.992079  384982 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key
	I1205 07:07:07.992226  384982 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada
	I1205 07:07:07.992293  384982 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key
	I1205 07:07:07.992512  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:07:07.992567  384982 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:07:07.992584  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:07:07.992622  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:07:07.992661  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:07:07.992697  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:07:07.992768  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:07:07.993641  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:07:08.013632  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:07:08.033788  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:07:08.054106  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:07:08.078883  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:07:08.099768  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:07:08.116845  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:07:08.135382  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:07:08.152628  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:07:08.169338  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:07:08.186981  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:07:08.206005  384982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:07:08.218973  384982 ssh_runner.go:195] Run: openssl version
	I1205 07:07:08.224889  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.231834  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:07:08.238627  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.242398  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.242447  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.277264  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:07:08.284110  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.290922  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:07:08.298213  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.301760  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.301803  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.338438  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:07:08.345749  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.353668  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:07:08.361252  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.364769  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.364816  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.405377  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
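(Each certificate placed under /usr/share/ca-certificates is exposed to OpenSSL-based clients through a hash-named symlink in /etc/ssl/certs — the 3ec20f2e.0, b5213941.0, and 51391683.0 checks above. A small sketch of that wiring follows; the cert path comes from the log, the rest is illustrative and would need root to actually write into /etc/ssl/certs.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout -in <cert>` prints the subject hash used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log above

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", certPath)
}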
	I1205 07:07:08.413075  384982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:07:08.416868  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:07:08.453487  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:07:08.487644  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:07:08.533187  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:07:08.593546  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:07:08.653721  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 07:07:08.709159  384982 kubeadm.go:401] StartCluster: {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:08.709282  384982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:07:08.709349  384982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:07:08.737962  384982 cri.go:89] found id: "d0abfce5c087bc9745f6cbf4f3fb0edbb94d2f33857125e80fac708771ec2b48"
	I1205 07:07:08.737982  384982 cri.go:89] found id: "b7dd1526bcbcdee4bcb466e7fb00e9c6e45c6a7c643eaff455cc39e8cadcb7d0"
	I1205 07:07:08.737987  384982 cri.go:89] found id: "ff2c7439c6494a7c11b9c98603177548654b07fa8af90217d8bc284c40e1913f"
	I1205 07:07:08.737992  384982 cri.go:89] found id: "5bbad9411c1730fb8fc31fd993b9c05654fd82cb5d89486f02679e687a86062c"
	I1205 07:07:08.737996  384982 cri.go:89] found id: ""
	I1205 07:07:08.738037  384982 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:07:08.749927  384982 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:08Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:08.750001  384982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:07:08.757435  384982 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:07:08.757451  384982 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:07:08.757493  384982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:07:08.764462  384982 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:07:08.765259  384982 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-624263" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:08.765847  384982 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-624263" cluster setting kubeconfig missing "newest-cni-624263" context setting]
	I1205 07:07:08.766845  384982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.768427  384982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:07:08.775598  384982 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1205 07:07:08.775623  384982 kubeadm.go:602] duration metric: took 18.165924ms to restartPrimaryControlPlane
	I1205 07:07:08.775632  384982 kubeadm.go:403] duration metric: took 66.480576ms to StartCluster
	I1205 07:07:08.775648  384982 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.775713  384982 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:08.777693  384982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.777931  384982 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:07:08.777993  384982 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:07:08.778091  384982 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-624263"
	I1205 07:07:08.778111  384982 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-624263"
	W1205 07:07:08.778120  384982 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:07:08.778116  384982 addons.go:70] Setting dashboard=true in profile "newest-cni-624263"
	I1205 07:07:08.778140  384982 addons.go:239] Setting addon dashboard=true in "newest-cni-624263"
	W1205 07:07:08.778150  384982 addons.go:248] addon dashboard should already be in state true
	I1205 07:07:08.778164  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:08.778186  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.778150  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.778139  384982 addons.go:70] Setting default-storageclass=true in profile "newest-cni-624263"
	I1205 07:07:08.778303  384982 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-624263"
	I1205 07:07:08.778585  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.778752  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.778783  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.779765  384982 out.go:179] * Verifying Kubernetes components...
	I1205 07:07:08.780933  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:08.804889  384982 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:07:08.804889  384982 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:07:08.806580  384982 addons.go:239] Setting addon default-storageclass=true in "newest-cni-624263"
	W1205 07:07:08.806597  384982 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:07:08.806617  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.806903  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.807441  384982 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:07:08.807461  384982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:07:08.807530  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.808424  384982 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 07:07:08.809309  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:07:08.809353  384982 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:07:08.809407  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.834751  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.836077  384982 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:07:08.836291  384982 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:07:08.837052  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.842660  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.859675  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.933525  384982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:07:08.947274  384982 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:07:08.947358  384982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:07:08.951314  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:07:08.951373  384982 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:07:08.952715  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:07:08.960188  384982 api_server.go:72] duration metric: took 182.229824ms to wait for apiserver process to appear ...
	I1205 07:07:08.960210  384982 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:07:08.960226  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:08.965821  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:07:08.965841  384982 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:07:08.967346  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:07:08.980049  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:07:08.980067  384982 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:07:08.994281  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:07:08.994299  384982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:07:09.008287  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:07:09.008306  384982 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:07:09.021481  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:07:09.021501  384982 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:07:09.034096  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:07:09.034115  384982 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:07:09.046446  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:07:09.046466  384982 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:07:09.058389  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:07:09.058405  384982 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:07:09.070248  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:07:10.183992  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 07:07:10.184023  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 07:07:10.184136  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.262013  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.262086  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:10.460707  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.465761  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.465796  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:10.811423  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.858674166s)
	I1205 07:07:10.811423  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.8440466s)
	I1205 07:07:10.811561  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.741287368s)
	I1205 07:07:10.815716  384982 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-624263 addons enable metrics-server
	
	I1205 07:07:10.822997  384982 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1205 07:07:10.824128  384982 addons.go:530] duration metric: took 2.046144375s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1205 07:07:10.961075  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.965412  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.965439  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:11.461149  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:11.465102  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1205 07:07:11.466004  384982 api_server.go:141] control plane version: v1.35.0-beta.0
	I1205 07:07:11.466025  384982 api_server.go:131] duration metric: took 2.505809422s to wait for apiserver health ...
	I1205 07:07:11.466034  384982 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:07:11.469408  384982 system_pods.go:59] 8 kube-system pods found
	I1205 07:07:11.469441  384982 system_pods.go:61] "coredns-7d764666f9-jkmhj" [126785e3-c7a3-451f-ac72-e05d87bb32f0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1205 07:07:11.469449  384982 system_pods.go:61] "etcd-newest-cni-624263" [9a4fe128-6030-4681-b201-a2a13ac29474] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:07:11.469475  384982 system_pods.go:61] "kindnet-fctwl" [29a59939-b66c-4796-9a9e-e1b442bccf1f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:07:11.469490  384982 system_pods.go:61] "kube-apiserver-newest-cni-624263" [2fc9852f-c8d5-41c2-8dbe-41056e227d75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:07:11.469499  384982 system_pods.go:61] "kube-controller-manager-newest-cni-624263" [957b864f-8ee5-40ce-9e1f-4396041c4525] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:07:11.469510  384982 system_pods.go:61] "kube-proxy-8v5qr" [59595bdd-49dc-4491-b494-1c48474ea8c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:07:11.469520  384982 system_pods.go:61] "kube-scheduler-newest-cni-624263" [a3c04907-1ac1-43af-827b-b4ab46dd553c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:07:11.469533  384982 system_pods.go:61] "storage-provisioner" [1cfc97af-739e-4ee9-838a-75962c29bc63] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1205 07:07:11.469542  384982 system_pods.go:74] duration metric: took 3.503315ms to wait for pod list to return data ...
	I1205 07:07:11.469551  384982 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:07:11.471664  384982 default_sa.go:45] found service account: "default"
	I1205 07:07:11.471681  384982 default_sa.go:55] duration metric: took 2.121784ms for default service account to be created ...
	I1205 07:07:11.471691  384982 kubeadm.go:587] duration metric: took 2.693735692s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:07:11.471704  384982 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:07:11.473883  384982 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:07:11.473903  384982 node_conditions.go:123] node cpu capacity is 8
	I1205 07:07:11.473915  384982 node_conditions.go:105] duration metric: took 2.207592ms to run NodePressure ...
	I1205 07:07:11.473924  384982 start.go:242] waiting for startup goroutines ...
	I1205 07:07:11.473931  384982 start.go:247] waiting for cluster config update ...
	I1205 07:07:11.473942  384982 start.go:256] writing updated cluster config ...
	I1205 07:07:11.474153  384982 ssh_runner.go:195] Run: rm -f paused
	I1205 07:07:11.522329  384982 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1205 07:07:11.524757  384982 out.go:179] * Done! kubectl is now configured to use "newest-cni-624263" cluster and "default" namespace by default
	W1205 07:07:06.706696  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:07:08.706849  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:10.705104  375543 pod_ready.go:94] pod "coredns-66bc5c9577-rg55r" is "Ready"
	I1205 07:07:10.705136  375543 pod_ready.go:86] duration metric: took 31.504740744s for pod "coredns-66bc5c9577-rg55r" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.707363  375543 pod_ready.go:83] waiting for pod "etcd-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.711598  375543 pod_ready.go:94] pod "etcd-embed-certs-770390" is "Ready"
	I1205 07:07:10.711616  375543 pod_ready.go:86] duration metric: took 4.234427ms for pod "etcd-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.713476  375543 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.717163  375543 pod_ready.go:94] pod "kube-apiserver-embed-certs-770390" is "Ready"
	I1205 07:07:10.717181  375543 pod_ready.go:86] duration metric: took 3.676871ms for pod "kube-apiserver-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.719115  375543 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.903969  375543 pod_ready.go:94] pod "kube-controller-manager-embed-certs-770390" is "Ready"
	I1205 07:07:10.903993  375543 pod_ready.go:86] duration metric: took 184.859493ms for pod "kube-controller-manager-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.104836  375543 pod_ready.go:83] waiting for pod "kube-proxy-7bjnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.504196  375543 pod_ready.go:94] pod "kube-proxy-7bjnn" is "Ready"
	I1205 07:07:11.504227  375543 pod_ready.go:86] duration metric: took 399.358917ms for pod "kube-proxy-7bjnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.703987  375543 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:12.103435  375543 pod_ready.go:94] pod "kube-scheduler-embed-certs-770390" is "Ready"
	I1205 07:07:12.103462  375543 pod_ready.go:86] duration metric: took 399.448083ms for pod "kube-scheduler-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:12.103479  375543 pod_ready.go:40] duration metric: took 32.906123608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:07:12.153648  375543 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 07:07:12.156415  375543 out.go:179] * Done! kubectl is now configured to use "embed-certs-770390" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 07:06:49 embed-certs-770390 crio[566]: time="2025-12-05T07:06:49.312472731Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:06:49 embed-certs-770390 crio[566]: time="2025-12-05T07:06:49.620296249Z" level=info msg="Removing container: db71dba4101ae9b6f145472ffb54e42cc079509d55e60c256b70d474c59600bb" id=c12490fe-4c60-4680-9206-e860ef62215a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:49 embed-certs-770390 crio[566]: time="2025-12-05T07:06:49.629083593Z" level=info msg="Removed container db71dba4101ae9b6f145472ffb54e42cc079509d55e60c256b70d474c59600bb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn/dashboard-metrics-scraper" id=c12490fe-4c60-4680-9206-e860ef62215a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.534455851Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=052bf5da-264e-42e7-96ef-8475e2511678 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.535297088Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=688559f8-07f9-4f75-be45-735187bb5298 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.536249603Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn/dashboard-metrics-scraper" id=6b84157f-3f49-44b6-af86-bca1eb109282 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.536404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.542195335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.542763022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.572248351Z" level=info msg="Created container 9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn/dashboard-metrics-scraper" id=6b84157f-3f49-44b6-af86-bca1eb109282 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.572938735Z" level=info msg="Starting container: 9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019" id=ffdbf77d-e7c7-4e84-a357-6682fed5d3b4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.574821511Z" level=info msg="Started container" PID=1770 containerID=9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn/dashboard-metrics-scraper id=ffdbf77d-e7c7-4e84-a357-6682fed5d3b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8cf77530f15cfe0aec2b806ebcba4885341957f6733ea5c37d1d0a62ad7664c2
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.659100895Z" level=info msg="Removing container: 3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5" id=cc692de6-d2b7-48bf-9858-16d981713232 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.668238707Z" level=info msg="Removed container 3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn/dashboard-metrics-scraper" id=cc692de6-d2b7-48bf-9858-16d981713232 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.67614833Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c1f5fa27-e31f-4ff5-988f-36e6825b9a0c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.677210361Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e1b0b009-4d59-4319-8ce4-5883717e2b00 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.678299148Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=92a29cf3-c6e9-4a82-84a3-6dbecff38520 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.678457865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.684914633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.685099769Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5318a04f807393c71cb682803983451dfdd1516c94174b2b31918b49a6003444/merged/etc/passwd: no such file or directory"
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.685132975Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5318a04f807393c71cb682803983451dfdd1516c94174b2b31918b49a6003444/merged/etc/group: no such file or directory"
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.686146409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.712385755Z" level=info msg="Created container 1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088: kube-system/storage-provisioner/storage-provisioner" id=92a29cf3-c6e9-4a82-84a3-6dbecff38520 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.713117951Z" level=info msg="Starting container: 1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088" id=e73ac87d-c594-41d0-973a-eade688d2fa6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.715695817Z" level=info msg="Started container" PID=1784 containerID=1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088 description=kube-system/storage-provisioner/storage-provisioner id=e73ac87d-c594-41d0-973a-eade688d2fa6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=52ddfd3bef236bb4d590b8ae271cfd0265c1a67ba07636fa86992a41b62dc6d0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1aa7cd837236b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           16 seconds ago      Running             storage-provisioner         1                   52ddfd3bef236       storage-provisioner                          kube-system
	9392561830b7e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   8cf77530f15cf       dashboard-metrics-scraper-6ffb444bf9-jp5dn   kubernetes-dashboard
	7a3eada6f877e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   96f8238f33446       kubernetes-dashboard-855c9754f9-2kzfd        kubernetes-dashboard
	df691a881bd88       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           47 seconds ago      Running             coredns                     0                   024bdc6d12081       coredns-66bc5c9577-rg55r                     kube-system
	6b44f3ce66c53       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   d8a3231ca816b       busybox                                      default
	688c23ae1eefd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   fedd58577705d       kindnet-dmpt2                                kube-system
	ee851fb4ae660       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           47 seconds ago      Running             kube-proxy                  0                   6c71597e06f39       kube-proxy-7bjnn                             kube-system
	6177c64055ee5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   52ddfd3bef236       storage-provisioner                          kube-system
	2e99e708af8cd       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           51 seconds ago      Running             etcd                        0                   c21194c4aff04       etcd-embed-certs-770390                      kube-system
	4d4e5c825a7de       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           51 seconds ago      Running             kube-controller-manager     0                   5eb9be070d018       kube-controller-manager-embed-certs-770390   kube-system
	923febfdc8bcc       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           51 seconds ago      Running             kube-apiserver              0                   1375fa901891d       kube-apiserver-embed-certs-770390            kube-system
	ae1745cf83f11       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           51 seconds ago      Running             kube-scheduler              0                   5a6bad199c30d       kube-scheduler-embed-certs-770390            kube-system
	
	
	==> coredns [df691a881bd8857e9f27b30400e75e80f5c1dd193eeaa849cf64bcb156b4f2bc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49584 - 63854 "HINFO IN 123180335028135115.6838869531824761202. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.109626967s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-770390
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-770390
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=embed-certs-770390
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_05_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:05:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-770390
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:07:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:07:18 +0000   Fri, 05 Dec 2025 07:05:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:07:18 +0000   Fri, 05 Dec 2025 07:05:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:07:18 +0000   Fri, 05 Dec 2025 07:05:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:07:18 +0000   Fri, 05 Dec 2025 07:05:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-770390
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                6db5accb-9611-4107-b9f0-962216d17807
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-rg55r                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m13s
	  kube-system                 etcd-embed-certs-770390                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m20s
	  kube-system                 kindnet-dmpt2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m13s
	  kube-system                 kube-apiserver-embed-certs-770390             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-embed-certs-770390    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-7bjnn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-embed-certs-770390             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jp5dn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2kzfd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m11s              kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m19s              kubelet          Node embed-certs-770390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s              kubelet          Node embed-certs-770390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s              kubelet          Node embed-certs-770390 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m19s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m14s              node-controller  Node embed-certs-770390 event: Registered Node embed-certs-770390 in Controller
	  Normal  NodeReady                92s                kubelet          Node embed-certs-770390 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node embed-certs-770390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node embed-certs-770390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node embed-certs-770390 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node embed-certs-770390 event: Registered Node embed-certs-770390 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [2e99e708af8cdf7e8644b2c854970fe3b2f9131df99f8ff6c3a19b08659e1df2] <==
	{"level":"warn","ts":"2025-12-05T07:06:36.897347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.910034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.916525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.923107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.929735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.936768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.942993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.951966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.959384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.966392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.976463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.982518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.989986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.997315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.003507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.010975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.017438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.031702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.039259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.047008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.053526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.066961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.074539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.081921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.134965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40868","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:07:26 up  1:49,  0 user,  load average: 3.09, 3.24, 2.27
	Linux embed-certs-770390 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [688c23ae1eefd91ac5bf2ce60c2ea6c1c9f585b311b36fd061bffce62338bb1c] <==
	I1205 07:06:39.179176       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:06:39.179502       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1205 07:06:39.179687       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:06:39.179710       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:06:39.179739       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:06:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:06:39.287817       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:06:39.287873       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:06:39.287892       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:06:39.288011       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:06:39.756694       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:06:39.756768       1 metrics.go:72] Registering metrics
	I1205 07:06:39.756888       1 controller.go:711] "Syncing nftables rules"
	I1205 07:06:49.288496       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 07:06:49.288557       1 main.go:301] handling current node
	I1205 07:06:59.292168       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 07:06:59.292200       1 main.go:301] handling current node
	I1205 07:07:09.288487       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 07:07:09.288531       1 main.go:301] handling current node
	I1205 07:07:19.287851       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 07:07:19.287881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [923febfdc8bccb1ad8239b49c08d7497c407d21accd38106c20a1aba8cecaffb] <==
	I1205 07:06:37.623984       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1205 07:06:37.624243       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1205 07:06:37.624255       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1205 07:06:37.624405       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 07:06:37.624454       1 aggregator.go:171] initial CRD sync complete...
	I1205 07:06:37.624465       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:06:37.624470       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:06:37.624476       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:06:37.624927       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1205 07:06:37.625019       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1205 07:06:37.643184       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1205 07:06:37.652878       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:06:37.658199       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:06:37.706888       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1205 07:06:37.928962       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 07:06:37.960631       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:06:37.979593       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:06:37.986254       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:06:37.993847       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:06:38.026155       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.156.217"}
	I1205 07:06:38.035956       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.78.172"}
	I1205 07:06:38.527181       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:06:40.951044       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:06:41.301299       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:06:41.450625       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4d4e5c825a7de3068675039cb3151e44142096587a1c8f2d75ad7ecbd5429caa] <==
	I1205 07:06:40.936988       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 07:06:40.947369       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1205 07:06:40.947384       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 07:06:40.947419       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1205 07:06:40.947497       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1205 07:06:40.947549       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1205 07:06:40.948638       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1205 07:06:40.948681       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1205 07:06:40.948683       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1205 07:06:40.948798       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 07:06:40.948892       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1205 07:06:40.948914       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1205 07:06:40.948971       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 07:06:40.949057       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-770390"
	I1205 07:06:40.949100       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 07:06:40.952624       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 07:06:40.952626       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:06:40.956901       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1205 07:06:40.959080       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1205 07:06:40.959176       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1205 07:06:40.960281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1205 07:06:40.960317       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1205 07:06:40.962519       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1205 07:06:40.963680       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1205 07:06:40.965946       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ee851fb4ae660958b7ef530ba88b955a76f13d0142203ad5c0fc539d6d40c0d8] <==
	I1205 07:06:38.951303       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:06:39.021598       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 07:06:39.122480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 07:06:39.122554       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1205 07:06:39.122664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:06:39.141774       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:06:39.141839       1 server_linux.go:132] "Using iptables Proxier"
	I1205 07:06:39.147984       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:06:39.148373       1 server.go:527] "Version info" version="v1.34.2"
	I1205 07:06:39.148407       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:06:39.150020       1 config.go:309] "Starting node config controller"
	I1205 07:06:39.150037       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:06:39.150110       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:06:39.150134       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:06:39.150168       1 config.go:200] "Starting service config controller"
	I1205 07:06:39.150177       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:06:39.150188       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:06:39.150199       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:06:39.250206       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:06:39.250248       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:06:39.250218       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:06:39.250242       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ae1745cf83f11e7391209efe832ac4ca4aab557828ba3aab75cf48e7fe75b73f] <==
	I1205 07:06:35.378914       1 serving.go:386] Generated self-signed cert in-memory
	W1205 07:06:37.595380       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 07:06:37.595421       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 07:06:37.595520       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 07:06:37.595530       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 07:06:37.621580       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1205 07:06:37.621669       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:06:37.624779       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 07:06:37.624914       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:06:37.624934       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:06:37.624953       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 07:06:37.725928       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 07:06:41 embed-certs-770390 kubelet[727]: I1205 07:06:41.574678     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7fc53b6c-2249-43c2-9989-72cc5652b20b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-2kzfd\" (UID: \"7fc53b6c-2249-43c2-9989-72cc5652b20b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2kzfd"
	Dec 05 07:06:41 embed-certs-770390 kubelet[727]: I1205 07:06:41.574730     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpphw\" (UniqueName: \"kubernetes.io/projected/7fc53b6c-2249-43c2-9989-72cc5652b20b-kube-api-access-xpphw\") pod \"kubernetes-dashboard-855c9754f9-2kzfd\" (UID: \"7fc53b6c-2249-43c2-9989-72cc5652b20b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2kzfd"
	Dec 05 07:06:41 embed-certs-770390 kubelet[727]: I1205 07:06:41.574762     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnxj6\" (UniqueName: \"kubernetes.io/projected/8bd2761b-8c0a-4674-a8d4-9f688fdcfb79-kube-api-access-bnxj6\") pod \"dashboard-metrics-scraper-6ffb444bf9-jp5dn\" (UID: \"8bd2761b-8c0a-4674-a8d4-9f688fdcfb79\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn"
	Dec 05 07:06:41 embed-certs-770390 kubelet[727]: I1205 07:06:41.574787     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8bd2761b-8c0a-4674-a8d4-9f688fdcfb79-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-jp5dn\" (UID: \"8bd2761b-8c0a-4674-a8d4-9f688fdcfb79\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn"
	Dec 05 07:06:45 embed-certs-770390 kubelet[727]: I1205 07:06:45.624670     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2kzfd" podStartSLOduration=1.220671341 podStartE2EDuration="4.62464589s" podCreationTimestamp="2025-12-05 07:06:41 +0000 UTC" firstStartedPulling="2025-12-05 07:06:41.843989784 +0000 UTC m=+7.422301541" lastFinishedPulling="2025-12-05 07:06:45.247964341 +0000 UTC m=+10.826276090" observedRunningTime="2025-12-05 07:06:45.623994333 +0000 UTC m=+11.202306106" watchObservedRunningTime="2025-12-05 07:06:45.62464589 +0000 UTC m=+11.202957651"
	Dec 05 07:06:48 embed-certs-770390 kubelet[727]: I1205 07:06:48.612137     727 scope.go:117] "RemoveContainer" containerID="db71dba4101ae9b6f145472ffb54e42cc079509d55e60c256b70d474c59600bb"
	Dec 05 07:06:49 embed-certs-770390 kubelet[727]: I1205 07:06:49.617189     727 scope.go:117] "RemoveContainer" containerID="db71dba4101ae9b6f145472ffb54e42cc079509d55e60c256b70d474c59600bb"
	Dec 05 07:06:49 embed-certs-770390 kubelet[727]: I1205 07:06:49.617389     727 scope.go:117] "RemoveContainer" containerID="3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5"
	Dec 05 07:06:49 embed-certs-770390 kubelet[727]: E1205 07:06:49.617633     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jp5dn_kubernetes-dashboard(8bd2761b-8c0a-4674-a8d4-9f688fdcfb79)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn" podUID="8bd2761b-8c0a-4674-a8d4-9f688fdcfb79"
	Dec 05 07:06:50 embed-certs-770390 kubelet[727]: I1205 07:06:50.621542     727 scope.go:117] "RemoveContainer" containerID="3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5"
	Dec 05 07:06:50 embed-certs-770390 kubelet[727]: E1205 07:06:50.621743     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jp5dn_kubernetes-dashboard(8bd2761b-8c0a-4674-a8d4-9f688fdcfb79)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn" podUID="8bd2761b-8c0a-4674-a8d4-9f688fdcfb79"
	Dec 05 07:06:52 embed-certs-770390 kubelet[727]: I1205 07:06:52.050123     727 scope.go:117] "RemoveContainer" containerID="3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5"
	Dec 05 07:06:52 embed-certs-770390 kubelet[727]: E1205 07:06:52.050414     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jp5dn_kubernetes-dashboard(8bd2761b-8c0a-4674-a8d4-9f688fdcfb79)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn" podUID="8bd2761b-8c0a-4674-a8d4-9f688fdcfb79"
	Dec 05 07:07:04 embed-certs-770390 kubelet[727]: I1205 07:07:04.534044     727 scope.go:117] "RemoveContainer" containerID="3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5"
	Dec 05 07:07:04 embed-certs-770390 kubelet[727]: I1205 07:07:04.657841     727 scope.go:117] "RemoveContainer" containerID="3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5"
	Dec 05 07:07:04 embed-certs-770390 kubelet[727]: I1205 07:07:04.658030     727 scope.go:117] "RemoveContainer" containerID="9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019"
	Dec 05 07:07:04 embed-certs-770390 kubelet[727]: E1205 07:07:04.658216     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jp5dn_kubernetes-dashboard(8bd2761b-8c0a-4674-a8d4-9f688fdcfb79)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn" podUID="8bd2761b-8c0a-4674-a8d4-9f688fdcfb79"
	Dec 05 07:07:09 embed-certs-770390 kubelet[727]: I1205 07:07:09.675671     727 scope.go:117] "RemoveContainer" containerID="6177c64055ee5f3bacac5f8934dc2061c6a6b0d2a95b03bf4373af7a3cbcaf0b"
	Dec 05 07:07:12 embed-certs-770390 kubelet[727]: I1205 07:07:12.050063     727 scope.go:117] "RemoveContainer" containerID="9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019"
	Dec 05 07:07:12 embed-certs-770390 kubelet[727]: E1205 07:07:12.050281     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jp5dn_kubernetes-dashboard(8bd2761b-8c0a-4674-a8d4-9f688fdcfb79)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn" podUID="8bd2761b-8c0a-4674-a8d4-9f688fdcfb79"
	Dec 05 07:07:24 embed-certs-770390 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:07:24 embed-certs-770390 kubelet[727]: I1205 07:07:24.202475     727 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 05 07:07:24 embed-certs-770390 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:07:24 embed-certs-770390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:07:24 embed-certs-770390 systemd[1]: kubelet.service: Consumed 1.529s CPU time.
	
	
	==> kubernetes-dashboard [7a3eada6f877e1286c7e6a656066b8252366921900d5eaa0ad8a32a8ddfb215e] <==
	2025/12/05 07:06:45 Using namespace: kubernetes-dashboard
	2025/12/05 07:06:45 Using in-cluster config to connect to apiserver
	2025/12/05 07:06:45 Using secret token for csrf signing
	2025/12/05 07:06:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/05 07:06:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/05 07:06:45 Successful initial request to the apiserver, version: v1.34.2
	2025/12/05 07:06:45 Generating JWE encryption key
	2025/12/05 07:06:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/05 07:06:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/05 07:06:45 Initializing JWE encryption key from synchronized object
	2025/12/05 07:06:45 Creating in-cluster Sidecar client
	2025/12/05 07:06:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:06:45 Serving insecurely on HTTP port: 9090
	2025/12/05 07:07:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:06:45 Starting overwatch
	
	
	==> storage-provisioner [1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088] <==
	I1205 07:07:09.729519       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:07:09.738350       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:07:09.738389       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1205 07:07:09.740801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:13.195497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:17.455408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:21.053171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:24.109839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6177c64055ee5f3bacac5f8934dc2061c6a6b0d2a95b03bf4373af7a3cbcaf0b] <==
	I1205 07:06:38.910655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 07:07:08.913285       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-770390 -n embed-certs-770390
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-770390 -n embed-certs-770390: exit status 2 (316.47873ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-770390 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-770390
helpers_test.go:243: (dbg) docker inspect embed-certs-770390:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15",
	        "Created": "2025-12-05T07:04:47.935376196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 375842,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:06:26.832137791Z",
	            "FinishedAt": "2025-12-05T07:06:25.952595519Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/hostname",
	        "HostsPath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/hosts",
	        "LogPath": "/var/lib/docker/containers/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15/efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15-json.log",
	        "Name": "/embed-certs-770390",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-770390:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-770390",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "efaf2da28c0c25540c55c153c3085f736138364fcd8bd7df2537369b12383e15",
	                "LowerDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567-init/diff:/var/lib/docker/overlay2/8c1166c19ed141e320ad1b367a085275270df686e1d58babdc6ed69439419b79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b5a2b4e10794b184e89160d47514adcc2a07fadced844b5609653e6e65b6567/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-770390",
	                "Source": "/var/lib/docker/volumes/embed-certs-770390/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-770390",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-770390",
	                "name.minikube.sigs.k8s.io": "embed-certs-770390",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6ceedc51fff3b7c6cac40b22a355481dbcbd397954c5ec86671641d1d0faa2a7",
	            "SandboxKey": "/var/run/docker/netns/6ceedc51fff3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-770390": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "931902d22986d998cad8286fbe16fdac2b5321eb6ca6ce1a3581e586ebb4b1ac",
	                    "EndpointID": "25a823ef62aebaacb31c310d5b612601cc9b7a981bbb2d0235fbdafb87a78f35",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "1a:8d:51:2d:e5:c8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-770390",
	                        "efaf2da28c0c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-770390 -n embed-certs-770390
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-770390 -n embed-certs-770390: exit status 2 (311.51074ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-770390 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-770390 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ image   │ old-k8s-version-874709 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p old-k8s-version-874709 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p old-k8s-version-874709                                                                                                                                                                                                                            │ old-k8s-version-874709       │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-770390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ start   │ -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ no-preload-008839 image list --format=json                                                                                                                                                                                                           │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ pause   │ -p no-preload-008839 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ delete  │ -p no-preload-008839                                                                                                                                                                                                                                 │ no-preload-008839            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-624263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │                     │
	│ stop    │ -p newest-cni-624263 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:06 UTC │ 05 Dec 25 07:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-624263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ start   │ -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ default-k8s-diff-port-172186 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ pause   │ -p default-k8s-diff-port-172186 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-172186                                                                                                                                                                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ delete  │ -p default-k8s-diff-port-172186                                                                                                                                                                                                                      │ default-k8s-diff-port-172186 │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ newest-cni-624263 image list --format=json                                                                                                                                                                                                           │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ delete  │ -p newest-cni-624263                                                                                                                                                                                                                                 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ delete  │ -p newest-cni-624263                                                                                                                                                                                                                                 │ newest-cni-624263            │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ image   │ embed-certs-770390 image list --format=json                                                                                                                                                                                                          │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │ 05 Dec 25 07:07 UTC │
	│ pause   │ -p embed-certs-770390 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-770390           │ jenkins │ v1.37.0 │ 05 Dec 25 07:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:07:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:07:01.213912  384982 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:07:01.214313  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214349  384982 out.go:374] Setting ErrFile to fd 2...
	I1205 07:07:01.214355  384982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:01.214781  384982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 07:07:01.215653  384982 out.go:368] Setting JSON to false
	I1205 07:07:01.216724  384982 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6565,"bootTime":1764911856,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:07:01.216808  384982 start.go:143] virtualization: kvm guest
	I1205 07:07:01.218407  384982 out.go:179] * [newest-cni-624263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:07:01.219810  384982 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:07:01.219833  384982 notify.go:221] Checking for updates...
	I1205 07:07:01.222062  384982 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:07:01.223099  384982 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:01.224159  384982 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 07:07:01.228780  384982 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:07:01.229941  384982 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:07:01.231538  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:01.232012  384982 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:07:01.255273  384982 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 07:07:01.255390  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.307181  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.297693108 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.307271  384982 docker.go:319] overlay module found
	I1205 07:07:01.308817  384982 out.go:179] * Using the docker driver based on existing profile
	I1205 07:07:01.309938  384982 start.go:309] selected driver: docker
	I1205 07:07:01.309951  384982 start.go:927] validating driver "docker" against &{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.310051  384982 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:07:01.310627  384982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:07:01.362953  384982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 07:07:01.353513591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 07:07:01.363234  384982 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:07:01.363265  384982 cni.go:84] Creating CNI manager for ""
	I1205 07:07:01.363312  384982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:07:01.363388  384982 start.go:353] cluster config:
	{Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:01.364930  384982 out.go:179] * Starting "newest-cni-624263" primary control-plane node in "newest-cni-624263" cluster
	I1205 07:07:01.365960  384982 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:07:01.367044  384982 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	W1205 07:06:57.706664  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:06:59.707033  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:01.368093  384982 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:07:01.368198  384982 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:07:01.387169  384982 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:07:01.387192  384982 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:07:01.393466  384982 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1205 07:07:01.635612  384982 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 07:07:01.635800  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.635881  384982 cache.go:107] acquiring lock: {Name:mk98363952ca1815516604fb7dbfef9be11a7d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635913  384982 cache.go:107] acquiring lock: {Name:mkf79bca1dcd2e8402871ccbd85f08189f26d5a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635887  384982 cache.go:107] acquiring lock: {Name:mk7e52439bbd1c3c582b2dbb20db8467b0caa4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635883  384982 cache.go:107] acquiring lock: {Name:mk205a6d5dedd135c0c99429d72b9328d8d5dc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.635961  384982 cache.go:107] acquiring lock: {Name:mk167c9428ef1965e0e29561c9593491905126f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636001  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:07:01.635990  384982 cache.go:107] acquiring lock: {Name:mk64ac073eb60c52be1998c1349c3f317eb7eb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636007  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:07:01.636013  384982 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 137.69µs
	I1205 07:07:01.636037  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:07:01.636039  384982 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:07:01.636031  384982 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 171.708µs
	I1205 07:07:01.636003  384982 cache.go:107] acquiring lock: {Name:mk55ddd5ec022e6049bc6d750efbad0639669233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636029  384982 cache.go:107] acquiring lock: {Name:mk4eccc9886628e868c0adec616b704f1c193356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636046  384982 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 87.511µs
	I1205 07:07:01.636052  384982 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636064  384982 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636066  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:07:01.636074  384982 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 88.508µs
	I1205 07:07:01.636082  384982 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:07:01.636019  384982 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 125.111µs
	I1205 07:07:01.636098  384982 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:07:01.636112  384982 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:07:01.636042  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:07:01.636150  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:07:01.636147  384982 start.go:360] acquireMachinesLock for newest-cni-624263: {Name:mka35bbd7b5824f70f8017fd9b3a0ee56ab72931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:07:01.636147  384982 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 265.61µs
	I1205 07:07:01.636162  384982 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636158  384982 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 197.698µs
	I1205 07:07:01.636178  384982 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:07:01.636191  384982 start.go:364] duration metric: took 30.266µs to acquireMachinesLock for "newest-cni-624263"
	I1205 07:07:01.636187  384982 cache.go:115] /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:07:01.636206  384982 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:07:01.636205  384982 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 226.523µs
	I1205 07:07:01.636213  384982 fix.go:54] fixHost starting: 
	I1205 07:07:01.636216  384982 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12758/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:07:01.636234  384982 cache.go:87] Successfully saved all images to host disk.
	I1205 07:07:01.636479  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.654206  384982 fix.go:112] recreateIfNeeded on newest-cni-624263: state=Stopped err=<nil>
	W1205 07:07:01.654241  384982 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:07:01.656485  384982 out.go:252] * Restarting existing docker container for "newest-cni-624263" ...
	I1205 07:07:01.656540  384982 cli_runner.go:164] Run: docker start newest-cni-624263
	I1205 07:07:01.895199  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:01.914785  384982 kic.go:430] container "newest-cni-624263" state is running.
	I1205 07:07:01.915225  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:01.934239  384982 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/config.json ...
	I1205 07:07:01.934479  384982 machine.go:94] provisionDockerMachine start ...
	I1205 07:07:01.934568  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:01.952380  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:01.952665  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:01.952679  384982 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:07:01.953292  384982 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55518->127.0.0.1:33138: read: connection reset by peer
	I1205 07:07:05.092419  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:07:05.092445  384982 ubuntu.go:182] provisioning hostname "newest-cni-624263"
	I1205 07:07:05.092491  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.112429  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.112718  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.112739  384982 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-624263 && echo "newest-cni-624263" | sudo tee /etc/hostname
	I1205 07:07:05.265486  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-624263
	
	I1205 07:07:05.265582  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.285453  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.285689  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.285716  384982 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-624263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-624263/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-624263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:07:05.425411  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:07:05.425436  384982 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12758/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12758/.minikube}
	I1205 07:07:05.425464  384982 ubuntu.go:190] setting up certificates
	I1205 07:07:05.425475  384982 provision.go:84] configureAuth start
	I1205 07:07:05.425532  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:05.443549  384982 provision.go:143] copyHostCerts
	I1205 07:07:05.443614  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem, removing ...
	I1205 07:07:05.443629  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem
	I1205 07:07:05.443700  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/ca.pem (1082 bytes)
	I1205 07:07:05.443800  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem, removing ...
	I1205 07:07:05.443816  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem
	I1205 07:07:05.443845  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/cert.pem (1123 bytes)
	I1205 07:07:05.443904  384982 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem, removing ...
	I1205 07:07:05.443915  384982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem
	I1205 07:07:05.443950  384982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12758/.minikube/key.pem (1679 bytes)
	I1205 07:07:05.444023  384982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem org=jenkins.newest-cni-624263 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-624263]
	I1205 07:07:05.672635  384982 provision.go:177] copyRemoteCerts
	I1205 07:07:05.672684  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:07:05.672730  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.690043  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:05.792000  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:07:05.810085  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:07:05.827489  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:07:05.844988  384982 provision.go:87] duration metric: took 419.49922ms to configureAuth
	I1205 07:07:05.845013  384982 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:07:05.845213  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:05.845355  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:05.868784  384982 main.go:143] libmachine: Using SSH client type: native
	I1205 07:07:05.868985  384982 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 07:07:05.869010  384982 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:07:06.168481  384982 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:07:06.168508  384982 machine.go:97] duration metric: took 4.234011493s to provisionDockerMachine
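
The "native" SSH client noted above (main.go: "Using SSH client type: native") is how these provisioning commands reach the node. A minimal, self-contained Go sketch of that round trip, with the address, user and key path copied from this log (illustrative only, not minikube's sshutil code):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and forwarded port are the ones printed by sshutil.go above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33138", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Any of the provisioning commands in this log could be run this way.
	out, err := sess.CombinedOutput("sudo systemctl is-active crio")
	fmt.Printf("%s(err=%v)\n", out, err)
}
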
	I1205 07:07:06.168521  384982 start.go:293] postStartSetup for "newest-cni-624263" (driver="docker")
	I1205 07:07:06.168536  384982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:07:06.168593  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:07:06.168662  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.188502  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	W1205 07:07:02.207380  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:07:04.704952  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:06.292387  384982 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:07:06.295922  384982 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:07:06.295950  384982 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:07:06.295961  384982 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/addons for local assets ...
	I1205 07:07:06.296006  384982 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12758/.minikube/files for local assets ...
	I1205 07:07:06.296104  384982 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I1205 07:07:06.296231  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:07:06.303904  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:07:06.321264  384982 start.go:296] duration metric: took 152.731097ms for postStartSetup
	I1205 07:07:06.321343  384982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:07:06.321386  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.342624  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.439978  384982 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:07:06.444248  384982 fix.go:56] duration metric: took 4.8080316s for fixHost
	I1205 07:07:06.444268  384982 start.go:83] releasing machines lock for "newest-cni-624263", held for 4.808068962s
	I1205 07:07:06.444356  384982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-624263
	I1205 07:07:06.461188  384982 ssh_runner.go:195] Run: cat /version.json
	I1205 07:07:06.461224  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.461315  384982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:07:06.461389  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:06.479772  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.480279  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:06.758196  384982 ssh_runner.go:195] Run: systemctl --version
	I1205 07:07:06.764592  384982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:07:06.798459  384982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:07:06.802811  384982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:07:06.802860  384982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:07:06.810439  384982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:07:06.810458  384982 start.go:496] detecting cgroup driver to use...
	I1205 07:07:06.810483  384982 detect.go:190] detected "systemd" cgroup driver on host os
	I1205 07:07:06.810515  384982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:07:06.823596  384982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:07:06.835347  384982 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:07:06.835386  384982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:07:06.849102  384982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:07:06.861013  384982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:07:06.946233  384982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:07:07.034814  384982 docker.go:234] disabling docker service ...
	I1205 07:07:07.034859  384982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:07:07.048490  384982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:07:07.062338  384982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:07:07.152172  384982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:07:07.242359  384982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:07:07.254816  384982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:07:07.268657  384982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:07:07.268723  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.277649  384982 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:07:07.277721  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.287203  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.296720  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.305673  384982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:07:07.314603  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.323209  384982 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.331118  384982 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:07:07.339939  384982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:07:07.346935  384982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:07:07.354783  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:07.445879  384982 ssh_runner.go:195] Run: sudo systemctl restart crio
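
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) before CRI-O is restarted. A rough Go equivalent of the first two substitutions, shown only to make the edit explicit (a hypothetical helper, not the crio.go code that produced these lines):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
	// CRI-O only picks this up after: systemctl daemon-reload && systemctl restart crio
}
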
	I1205 07:07:07.588541  384982 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:07:07.588604  384982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:07:07.594687  384982 start.go:564] Will wait 60s for crictl version
	I1205 07:07:07.595153  384982 ssh_runner.go:195] Run: which crictl
	I1205 07:07:07.598691  384982 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:07:07.626384  384982 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:07:07.626465  384982 ssh_runner.go:195] Run: crio --version
	I1205 07:07:07.656627  384982 ssh_runner.go:195] Run: crio --version
	I1205 07:07:07.691598  384982 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 07:07:07.692738  384982 cli_runner.go:164] Run: docker network inspect newest-cni-624263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:07:07.715101  384982 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1205 07:07:07.719286  384982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
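
The grep -v / echo pipeline above keeps /etc/hosts idempotent: any stale host.minikube.internal line is dropped before the current gateway IP is re-appended. A short Go sketch of the same idea, with the IP and hostname taken from this log (illustrative, not minikube's implementation):

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\t<name>" and appends
// a fresh "<ip>\t<name>" entry, mirroring the shell pipeline in the log above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Values from the log above; rewriting /etc/hosts requires root.
	if err := ensureHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
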
	I1205 07:07:07.731914  384982 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:07:07.733217  384982 kubeadm.go:884] updating cluster {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:07:07.733394  384982 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:07:07.733451  384982 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:07:07.764980  384982 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:07:07.765003  384982 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:07:07.765012  384982 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1205 07:07:07.765132  384982 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-624263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:07:07.765207  384982 ssh_runner.go:195] Run: crio config
	I1205 07:07:07.812534  384982 cni.go:84] Creating CNI manager for ""
	I1205 07:07:07.812555  384982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:07:07.812573  384982 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:07:07.812604  384982 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-624263 NodeName:newest-cni-624263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:07:07.812765  384982 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-624263"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:07:07.812831  384982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:07:07.820594  384982 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:07:07.820653  384982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:07:07.828109  384982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1205 07:07:07.840571  384982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:07:07.852346  384982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 07:07:07.864062  384982 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:07:07.867420  384982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:07:07.876647  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:07.969578  384982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:07:07.991685  384982 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263 for IP: 192.168.103.2
	I1205 07:07:07.991713  384982 certs.go:195] generating shared ca certs ...
	I1205 07:07:07.991735  384982 certs.go:227] acquiring lock for ca certs: {Name:mk9c106269961caa11a83b814f66e7b661228d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:07.991888  384982 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key
	I1205 07:07:07.991947  384982 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key
	I1205 07:07:07.991961  384982 certs.go:257] generating profile certs ...
	I1205 07:07:07.992079  384982 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/client.key
	I1205 07:07:07.992226  384982 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key.2a250ada
	I1205 07:07:07.992293  384982 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key
	I1205 07:07:07.992512  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem (1338 bytes)
	W1205 07:07:07.992567  384982 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I1205 07:07:07.992584  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:07:07.992622  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:07:07.992661  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:07:07.992697  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/certs/key.pem (1679 bytes)
	I1205 07:07:07.992768  384982 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I1205 07:07:07.993641  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:07:08.013632  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:07:08.033788  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:07:08.054106  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:07:08.078883  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:07:08.099768  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:07:08.116845  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:07:08.135382  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/newest-cni-624263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:07:08.152628  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I1205 07:07:08.169338  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:07:08.186981  384982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12758/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I1205 07:07:08.206005  384982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:07:08.218973  384982 ssh_runner.go:195] Run: openssl version
	I1205 07:07:08.224889  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.231834  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem
	I1205 07:07:08.238627  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.242398  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:23 /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.242447  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I1205 07:07:08.277264  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:07:08.284110  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.290922  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:07:08.298213  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.301760  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.301803  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:07:08.338438  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:07:08.345749  384982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.353668  384982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem
	I1205 07:07:08.361252  384982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.364769  384982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:23 /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.364816  384982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I1205 07:07:08.405377  384982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
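
The test/ln/openssl/test sequences above work because OpenSSL resolves CAs in /etc/ssl/certs through subject-hash symlinks; in this run minikubeCA.pem hashes to b5213941 and 16314.pem to 51391683. A small illustrative Go sketch of that step, shelling out to openssl just as the log does (paths assumed from the log):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Equivalent of: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in this run
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // drop any stale link before recreating it
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
}
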
	I1205 07:07:08.413075  384982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:07:08.416868  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:07:08.453487  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:07:08.487644  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:07:08.533187  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:07:08.593546  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:07:08.653721  384982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 07:07:08.709159  384982 kubeadm.go:401] StartCluster: {Name:newest-cni-624263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-624263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:07:08.709282  384982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:07:08.709349  384982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:07:08.737962  384982 cri.go:89] found id: "d0abfce5c087bc9745f6cbf4f3fb0edbb94d2f33857125e80fac708771ec2b48"
	I1205 07:07:08.737982  384982 cri.go:89] found id: "b7dd1526bcbcdee4bcb466e7fb00e9c6e45c6a7c643eaff455cc39e8cadcb7d0"
	I1205 07:07:08.737987  384982 cri.go:89] found id: "ff2c7439c6494a7c11b9c98603177548654b07fa8af90217d8bc284c40e1913f"
	I1205 07:07:08.737992  384982 cri.go:89] found id: "5bbad9411c1730fb8fc31fd993b9c05654fd82cb5d89486f02679e687a86062c"
	I1205 07:07:08.737996  384982 cri.go:89] found id: ""
	I1205 07:07:08.738037  384982 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:07:08.749927  384982 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:07:08Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:07:08.750001  384982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:07:08.757435  384982 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:07:08.757451  384982 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:07:08.757493  384982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:07:08.764462  384982 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:07:08.765259  384982 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-624263" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:08.765847  384982 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12758/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-624263" cluster setting kubeconfig missing "newest-cni-624263" context setting]
	I1205 07:07:08.766845  384982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.768427  384982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:07:08.775598  384982 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1205 07:07:08.775623  384982 kubeadm.go:602] duration metric: took 18.165924ms to restartPrimaryControlPlane
	I1205 07:07:08.775632  384982 kubeadm.go:403] duration metric: took 66.480576ms to StartCluster
	I1205 07:07:08.775648  384982 settings.go:142] acquiring lock: {Name:mk457445011de2de243f69c0d90322aa5f921211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.775713  384982 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 07:07:08.777693  384982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12758/kubeconfig: {Name:mk572c9767c266d1d9dcdf01ee8c7de8cfd10ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:07:08.777931  384982 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:07:08.777993  384982 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:07:08.778091  384982 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-624263"
	I1205 07:07:08.778111  384982 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-624263"
	W1205 07:07:08.778120  384982 addons.go:248] addon storage-provisioner should already be in state true
	I1205 07:07:08.778116  384982 addons.go:70] Setting dashboard=true in profile "newest-cni-624263"
	I1205 07:07:08.778140  384982 addons.go:239] Setting addon dashboard=true in "newest-cni-624263"
	W1205 07:07:08.778150  384982 addons.go:248] addon dashboard should already be in state true
	I1205 07:07:08.778164  384982 config.go:182] Loaded profile config "newest-cni-624263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:07:08.778186  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.778150  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.778139  384982 addons.go:70] Setting default-storageclass=true in profile "newest-cni-624263"
	I1205 07:07:08.778303  384982 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-624263"
	I1205 07:07:08.778585  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.778752  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.778783  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.779765  384982 out.go:179] * Verifying Kubernetes components...
	I1205 07:07:08.780933  384982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:07:08.804889  384982 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:07:08.804889  384982 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:07:08.806580  384982 addons.go:239] Setting addon default-storageclass=true in "newest-cni-624263"
	W1205 07:07:08.806597  384982 addons.go:248] addon default-storageclass should already be in state true
	I1205 07:07:08.806617  384982 host.go:66] Checking if "newest-cni-624263" exists ...
	I1205 07:07:08.806903  384982 cli_runner.go:164] Run: docker container inspect newest-cni-624263 --format={{.State.Status}}
	I1205 07:07:08.807441  384982 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:07:08.807461  384982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:07:08.807530  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.808424  384982 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 07:07:08.809309  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:07:08.809353  384982 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:07:08.809407  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.834751  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.836077  384982 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:07:08.836291  384982 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:07:08.837052  384982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-624263
	I1205 07:07:08.842660  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.859675  384982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/newest-cni-624263/id_rsa Username:docker}
	I1205 07:07:08.933525  384982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:07:08.947274  384982 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:07:08.947358  384982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:07:08.951314  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:07:08.951373  384982 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:07:08.952715  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:07:08.960188  384982 api_server.go:72] duration metric: took 182.229824ms to wait for apiserver process to appear ...
	I1205 07:07:08.960210  384982 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:07:08.960226  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:08.965821  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:07:08.965841  384982 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:07:08.967346  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:07:08.980049  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:07:08.980067  384982 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:07:08.994281  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:07:08.994299  384982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:07:09.008287  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:07:09.008306  384982 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:07:09.021481  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:07:09.021501  384982 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:07:09.034096  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:07:09.034115  384982 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:07:09.046446  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:07:09.046466  384982 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:07:09.058389  384982 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:07:09.058405  384982 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:07:09.070248  384982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:07:10.183992  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 07:07:10.184023  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 07:07:10.184136  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.262013  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.262086  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:10.460707  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.465761  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.465796  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:10.811423  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.858674166s)
	I1205 07:07:10.811423  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.8440466s)
	I1205 07:07:10.811561  384982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.741287368s)
	I1205 07:07:10.815716  384982 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-624263 addons enable metrics-server
	
	I1205 07:07:10.822997  384982 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1205 07:07:10.824128  384982 addons.go:530] duration metric: took 2.046144375s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1205 07:07:10.961075  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:10.965412  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:07:10.965439  384982 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:07:11.461149  384982 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1205 07:07:11.465102  384982 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1205 07:07:11.466004  384982 api_server.go:141] control plane version: v1.35.0-beta.0
	I1205 07:07:11.466025  384982 api_server.go:131] duration metric: took 2.505809422s to wait for apiserver health ...
	I1205 07:07:11.466034  384982 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:07:11.469408  384982 system_pods.go:59] 8 kube-system pods found
	I1205 07:07:11.469441  384982 system_pods.go:61] "coredns-7d764666f9-jkmhj" [126785e3-c7a3-451f-ac72-e05d87bb32f0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1205 07:07:11.469449  384982 system_pods.go:61] "etcd-newest-cni-624263" [9a4fe128-6030-4681-b201-a2a13ac29474] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:07:11.469475  384982 system_pods.go:61] "kindnet-fctwl" [29a59939-b66c-4796-9a9e-e1b442bccf1f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 07:07:11.469490  384982 system_pods.go:61] "kube-apiserver-newest-cni-624263" [2fc9852f-c8d5-41c2-8dbe-41056e227d75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:07:11.469499  384982 system_pods.go:61] "kube-controller-manager-newest-cni-624263" [957b864f-8ee5-40ce-9e1f-4396041c4525] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:07:11.469510  384982 system_pods.go:61] "kube-proxy-8v5qr" [59595bdd-49dc-4491-b494-1c48474ea8c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:07:11.469520  384982 system_pods.go:61] "kube-scheduler-newest-cni-624263" [a3c04907-1ac1-43af-827b-b4ab46dd553c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:07:11.469533  384982 system_pods.go:61] "storage-provisioner" [1cfc97af-739e-4ee9-838a-75962c29bc63] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1205 07:07:11.469542  384982 system_pods.go:74] duration metric: took 3.503315ms to wait for pod list to return data ...
	I1205 07:07:11.469551  384982 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:07:11.471664  384982 default_sa.go:45] found service account: "default"
	I1205 07:07:11.471681  384982 default_sa.go:55] duration metric: took 2.121784ms for default service account to be created ...
	I1205 07:07:11.471691  384982 kubeadm.go:587] duration metric: took 2.693735692s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:07:11.471704  384982 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:07:11.473883  384982 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 07:07:11.473903  384982 node_conditions.go:123] node cpu capacity is 8
	I1205 07:07:11.473915  384982 node_conditions.go:105] duration metric: took 2.207592ms to run NodePressure ...
	I1205 07:07:11.473924  384982 start.go:242] waiting for startup goroutines ...
	I1205 07:07:11.473931  384982 start.go:247] waiting for cluster config update ...
	I1205 07:07:11.473942  384982 start.go:256] writing updated cluster config ...
	I1205 07:07:11.474153  384982 ssh_runner.go:195] Run: rm -f paused
	I1205 07:07:11.522329  384982 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1205 07:07:11.524757  384982 out.go:179] * Done! kubectl is now configured to use "newest-cni-624263" cluster and "default" namespace by default
	W1205 07:07:06.706696  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	W1205 07:07:08.706849  375543 pod_ready.go:104] pod "coredns-66bc5c9577-rg55r" is not "Ready", error: <nil>
	I1205 07:07:10.705104  375543 pod_ready.go:94] pod "coredns-66bc5c9577-rg55r" is "Ready"
	I1205 07:07:10.705136  375543 pod_ready.go:86] duration metric: took 31.504740744s for pod "coredns-66bc5c9577-rg55r" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.707363  375543 pod_ready.go:83] waiting for pod "etcd-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.711598  375543 pod_ready.go:94] pod "etcd-embed-certs-770390" is "Ready"
	I1205 07:07:10.711616  375543 pod_ready.go:86] duration metric: took 4.234427ms for pod "etcd-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.713476  375543 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.717163  375543 pod_ready.go:94] pod "kube-apiserver-embed-certs-770390" is "Ready"
	I1205 07:07:10.717181  375543 pod_ready.go:86] duration metric: took 3.676871ms for pod "kube-apiserver-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.719115  375543 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:10.903969  375543 pod_ready.go:94] pod "kube-controller-manager-embed-certs-770390" is "Ready"
	I1205 07:07:10.903993  375543 pod_ready.go:86] duration metric: took 184.859493ms for pod "kube-controller-manager-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.104836  375543 pod_ready.go:83] waiting for pod "kube-proxy-7bjnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.504196  375543 pod_ready.go:94] pod "kube-proxy-7bjnn" is "Ready"
	I1205 07:07:11.504227  375543 pod_ready.go:86] duration metric: took 399.358917ms for pod "kube-proxy-7bjnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:11.703987  375543 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:12.103435  375543 pod_ready.go:94] pod "kube-scheduler-embed-certs-770390" is "Ready"
	I1205 07:07:12.103462  375543 pod_ready.go:86] duration metric: took 399.448083ms for pod "kube-scheduler-embed-certs-770390" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:07:12.103479  375543 pod_ready.go:40] duration metric: took 32.906123608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:07:12.153648  375543 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 07:07:12.156415  375543 out.go:179] * Done! kubectl is now configured to use "embed-certs-770390" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 07:06:49 embed-certs-770390 crio[566]: time="2025-12-05T07:06:49.312472731Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:06:49 embed-certs-770390 crio[566]: time="2025-12-05T07:06:49.620296249Z" level=info msg="Removing container: db71dba4101ae9b6f145472ffb54e42cc079509d55e60c256b70d474c59600bb" id=c12490fe-4c60-4680-9206-e860ef62215a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:06:49 embed-certs-770390 crio[566]: time="2025-12-05T07:06:49.629083593Z" level=info msg="Removed container db71dba4101ae9b6f145472ffb54e42cc079509d55e60c256b70d474c59600bb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn/dashboard-metrics-scraper" id=c12490fe-4c60-4680-9206-e860ef62215a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.534455851Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=052bf5da-264e-42e7-96ef-8475e2511678 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.535297088Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=688559f8-07f9-4f75-be45-735187bb5298 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.536249603Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn/dashboard-metrics-scraper" id=6b84157f-3f49-44b6-af86-bca1eb109282 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.536404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.542195335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.542763022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.572248351Z" level=info msg="Created container 9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn/dashboard-metrics-scraper" id=6b84157f-3f49-44b6-af86-bca1eb109282 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.572938735Z" level=info msg="Starting container: 9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019" id=ffdbf77d-e7c7-4e84-a357-6682fed5d3b4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.574821511Z" level=info msg="Started container" PID=1770 containerID=9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn/dashboard-metrics-scraper id=ffdbf77d-e7c7-4e84-a357-6682fed5d3b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8cf77530f15cfe0aec2b806ebcba4885341957f6733ea5c37d1d0a62ad7664c2
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.659100895Z" level=info msg="Removing container: 3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5" id=cc692de6-d2b7-48bf-9858-16d981713232 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:07:04 embed-certs-770390 crio[566]: time="2025-12-05T07:07:04.668238707Z" level=info msg="Removed container 3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn/dashboard-metrics-scraper" id=cc692de6-d2b7-48bf-9858-16d981713232 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.67614833Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c1f5fa27-e31f-4ff5-988f-36e6825b9a0c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.677210361Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e1b0b009-4d59-4319-8ce4-5883717e2b00 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.678299148Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=92a29cf3-c6e9-4a82-84a3-6dbecff38520 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.678457865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.684914633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.685099769Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5318a04f807393c71cb682803983451dfdd1516c94174b2b31918b49a6003444/merged/etc/passwd: no such file or directory"
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.685132975Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5318a04f807393c71cb682803983451dfdd1516c94174b2b31918b49a6003444/merged/etc/group: no such file or directory"
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.686146409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.712385755Z" level=info msg="Created container 1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088: kube-system/storage-provisioner/storage-provisioner" id=92a29cf3-c6e9-4a82-84a3-6dbecff38520 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.713117951Z" level=info msg="Starting container: 1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088" id=e73ac87d-c594-41d0-973a-eade688d2fa6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:07:09 embed-certs-770390 crio[566]: time="2025-12-05T07:07:09.715695817Z" level=info msg="Started container" PID=1784 containerID=1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088 description=kube-system/storage-provisioner/storage-provisioner id=e73ac87d-c594-41d0-973a-eade688d2fa6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=52ddfd3bef236bb4d590b8ae271cfd0265c1a67ba07636fa86992a41b62dc6d0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1aa7cd837236b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   52ddfd3bef236       storage-provisioner                          kube-system
	9392561830b7e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   8cf77530f15cf       dashboard-metrics-scraper-6ffb444bf9-jp5dn   kubernetes-dashboard
	7a3eada6f877e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   96f8238f33446       kubernetes-dashboard-855c9754f9-2kzfd        kubernetes-dashboard
	df691a881bd88       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   024bdc6d12081       coredns-66bc5c9577-rg55r                     kube-system
	6b44f3ce66c53       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   d8a3231ca816b       busybox                                      default
	688c23ae1eefd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   fedd58577705d       kindnet-dmpt2                                kube-system
	ee851fb4ae660       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           49 seconds ago      Running             kube-proxy                  0                   6c71597e06f39       kube-proxy-7bjnn                             kube-system
	6177c64055ee5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   52ddfd3bef236       storage-provisioner                          kube-system
	2e99e708af8cd       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   c21194c4aff04       etcd-embed-certs-770390                      kube-system
	4d4e5c825a7de       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           53 seconds ago      Running             kube-controller-manager     0                   5eb9be070d018       kube-controller-manager-embed-certs-770390   kube-system
	923febfdc8bcc       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           53 seconds ago      Running             kube-apiserver              0                   1375fa901891d       kube-apiserver-embed-certs-770390            kube-system
	ae1745cf83f11       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           53 seconds ago      Running             kube-scheduler              0                   5a6bad199c30d       kube-scheduler-embed-certs-770390            kube-system
	
	
	==> coredns [df691a881bd8857e9f27b30400e75e80f5c1dd193eeaa849cf64bcb156b4f2bc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49584 - 63854 "HINFO IN 123180335028135115.6838869531824761202. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.109626967s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-770390
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-770390
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=embed-certs-770390
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_05_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:05:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-770390
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:07:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:07:18 +0000   Fri, 05 Dec 2025 07:05:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:07:18 +0000   Fri, 05 Dec 2025 07:05:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:07:18 +0000   Fri, 05 Dec 2025 07:05:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:07:18 +0000   Fri, 05 Dec 2025 07:05:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-770390
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                6db5accb-9611-4107-b9f0-962216d17807
	  Boot ID:                    c4c5d62c-b804-4e63-b53e-a6c9d3926d9c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-rg55r                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m15s
	  kube-system                 etcd-embed-certs-770390                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m22s
	  kube-system                 kindnet-dmpt2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m15s
	  kube-system                 kube-apiserver-embed-certs-770390             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-embed-certs-770390    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-7bjnn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-embed-certs-770390             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jp5dn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2kzfd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m12s              kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m21s              kubelet          Node embed-certs-770390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s              kubelet          Node embed-certs-770390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s              kubelet          Node embed-certs-770390 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m21s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m16s              node-controller  Node embed-certs-770390 event: Registered Node embed-certs-770390 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-770390 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node embed-certs-770390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node embed-certs-770390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node embed-certs-770390 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node embed-certs-770390 event: Registered Node embed-certs-770390 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +0.032037] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 c4 57 8e be c5 08 06
	[ +22.000477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 c2 77 1a 1a f4 08 06
	[  +0.000285] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 b5 4a 00 cf 4e 08 06
	[ +21.180292] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[Dec 5 07:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 56 2e 5d 65 64 08 06
	[  +0.000385] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 6c d5 80 8e 01 08 06
	[  +5.755957] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	[  +0.008397] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a a5 6f 95 89 46 08 06
	[  +4.110998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 ed b0 bb 24 e2 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 72 97 25 30 b9 08 06
	[ +10.860368] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 7b a3 d6 6a 3e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0a 04 2e 26 e3 08 06
	
	
	==> etcd [2e99e708af8cdf7e8644b2c854970fe3b2f9131df99f8ff6c3a19b08659e1df2] <==
	{"level":"warn","ts":"2025-12-05T07:06:36.897347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.910034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.916525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.923107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.929735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.936768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.942993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.951966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.959384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.966392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.976463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.982518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.989986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:36.997315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.003507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.010975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.017438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.031702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.039259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.047008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.053526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.066961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.074539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.081921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:06:37.134965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40868","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:07:28 up  1:49,  0 user,  load average: 3.09, 3.24, 2.27
	Linux embed-certs-770390 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [688c23ae1eefd91ac5bf2ce60c2ea6c1c9f585b311b36fd061bffce62338bb1c] <==
	I1205 07:06:39.179176       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:06:39.179502       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1205 07:06:39.179687       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:06:39.179710       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:06:39.179739       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:06:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:06:39.287817       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:06:39.287873       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:06:39.287892       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:06:39.288011       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:06:39.756694       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:06:39.756768       1 metrics.go:72] Registering metrics
	I1205 07:06:39.756888       1 controller.go:711] "Syncing nftables rules"
	I1205 07:06:49.288496       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 07:06:49.288557       1 main.go:301] handling current node
	I1205 07:06:59.292168       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 07:06:59.292200       1 main.go:301] handling current node
	I1205 07:07:09.288487       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 07:07:09.288531       1 main.go:301] handling current node
	I1205 07:07:19.287851       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 07:07:19.287881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [923febfdc8bccb1ad8239b49c08d7497c407d21accd38106c20a1aba8cecaffb] <==
	I1205 07:06:37.623984       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1205 07:06:37.624243       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1205 07:06:37.624255       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1205 07:06:37.624405       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 07:06:37.624454       1 aggregator.go:171] initial CRD sync complete...
	I1205 07:06:37.624465       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:06:37.624470       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:06:37.624476       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:06:37.624927       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1205 07:06:37.625019       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1205 07:06:37.643184       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1205 07:06:37.652878       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:06:37.658199       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:06:37.706888       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1205 07:06:37.928962       1 controller.go:667] quota admission added evaluator for: namespaces
	I1205 07:06:37.960631       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 07:06:37.979593       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 07:06:37.986254       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 07:06:37.993847       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:06:38.026155       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.156.217"}
	I1205 07:06:38.035956       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.78.172"}
	I1205 07:06:38.527181       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:06:40.951044       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:06:41.301299       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:06:41.450625       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4d4e5c825a7de3068675039cb3151e44142096587a1c8f2d75ad7ecbd5429caa] <==
	I1205 07:06:40.936988       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 07:06:40.947369       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1205 07:06:40.947384       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 07:06:40.947419       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1205 07:06:40.947497       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1205 07:06:40.947549       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1205 07:06:40.948638       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1205 07:06:40.948681       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1205 07:06:40.948683       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1205 07:06:40.948798       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 07:06:40.948892       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1205 07:06:40.948914       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1205 07:06:40.948971       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 07:06:40.949057       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-770390"
	I1205 07:06:40.949100       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 07:06:40.952624       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 07:06:40.952626       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:06:40.956901       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1205 07:06:40.959080       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1205 07:06:40.959176       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1205 07:06:40.960281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1205 07:06:40.960317       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1205 07:06:40.962519       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1205 07:06:40.963680       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1205 07:06:40.965946       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ee851fb4ae660958b7ef530ba88b955a76f13d0142203ad5c0fc539d6d40c0d8] <==
	I1205 07:06:38.951303       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:06:39.021598       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 07:06:39.122480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 07:06:39.122554       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1205 07:06:39.122664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:06:39.141774       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:06:39.141839       1 server_linux.go:132] "Using iptables Proxier"
	I1205 07:06:39.147984       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:06:39.148373       1 server.go:527] "Version info" version="v1.34.2"
	I1205 07:06:39.148407       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:06:39.150020       1 config.go:309] "Starting node config controller"
	I1205 07:06:39.150037       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:06:39.150110       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:06:39.150134       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:06:39.150168       1 config.go:200] "Starting service config controller"
	I1205 07:06:39.150177       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:06:39.150188       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:06:39.150199       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:06:39.250206       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:06:39.250248       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:06:39.250218       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:06:39.250242       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ae1745cf83f11e7391209efe832ac4ca4aab557828ba3aab75cf48e7fe75b73f] <==
	I1205 07:06:35.378914       1 serving.go:386] Generated self-signed cert in-memory
	W1205 07:06:37.595380       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 07:06:37.595421       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 07:06:37.595520       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 07:06:37.595530       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 07:06:37.621580       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1205 07:06:37.621669       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:06:37.624779       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 07:06:37.624914       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:06:37.624934       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:06:37.624953       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 07:06:37.725928       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 07:06:41 embed-certs-770390 kubelet[727]: I1205 07:06:41.574678     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7fc53b6c-2249-43c2-9989-72cc5652b20b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-2kzfd\" (UID: \"7fc53b6c-2249-43c2-9989-72cc5652b20b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2kzfd"
	Dec 05 07:06:41 embed-certs-770390 kubelet[727]: I1205 07:06:41.574730     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpphw\" (UniqueName: \"kubernetes.io/projected/7fc53b6c-2249-43c2-9989-72cc5652b20b-kube-api-access-xpphw\") pod \"kubernetes-dashboard-855c9754f9-2kzfd\" (UID: \"7fc53b6c-2249-43c2-9989-72cc5652b20b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2kzfd"
	Dec 05 07:06:41 embed-certs-770390 kubelet[727]: I1205 07:06:41.574762     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnxj6\" (UniqueName: \"kubernetes.io/projected/8bd2761b-8c0a-4674-a8d4-9f688fdcfb79-kube-api-access-bnxj6\") pod \"dashboard-metrics-scraper-6ffb444bf9-jp5dn\" (UID: \"8bd2761b-8c0a-4674-a8d4-9f688fdcfb79\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn"
	Dec 05 07:06:41 embed-certs-770390 kubelet[727]: I1205 07:06:41.574787     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8bd2761b-8c0a-4674-a8d4-9f688fdcfb79-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-jp5dn\" (UID: \"8bd2761b-8c0a-4674-a8d4-9f688fdcfb79\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn"
	Dec 05 07:06:45 embed-certs-770390 kubelet[727]: I1205 07:06:45.624670     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2kzfd" podStartSLOduration=1.220671341 podStartE2EDuration="4.62464589s" podCreationTimestamp="2025-12-05 07:06:41 +0000 UTC" firstStartedPulling="2025-12-05 07:06:41.843989784 +0000 UTC m=+7.422301541" lastFinishedPulling="2025-12-05 07:06:45.247964341 +0000 UTC m=+10.826276090" observedRunningTime="2025-12-05 07:06:45.623994333 +0000 UTC m=+11.202306106" watchObservedRunningTime="2025-12-05 07:06:45.62464589 +0000 UTC m=+11.202957651"
	Dec 05 07:06:48 embed-certs-770390 kubelet[727]: I1205 07:06:48.612137     727 scope.go:117] "RemoveContainer" containerID="db71dba4101ae9b6f145472ffb54e42cc079509d55e60c256b70d474c59600bb"
	Dec 05 07:06:49 embed-certs-770390 kubelet[727]: I1205 07:06:49.617189     727 scope.go:117] "RemoveContainer" containerID="db71dba4101ae9b6f145472ffb54e42cc079509d55e60c256b70d474c59600bb"
	Dec 05 07:06:49 embed-certs-770390 kubelet[727]: I1205 07:06:49.617389     727 scope.go:117] "RemoveContainer" containerID="3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5"
	Dec 05 07:06:49 embed-certs-770390 kubelet[727]: E1205 07:06:49.617633     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jp5dn_kubernetes-dashboard(8bd2761b-8c0a-4674-a8d4-9f688fdcfb79)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn" podUID="8bd2761b-8c0a-4674-a8d4-9f688fdcfb79"
	Dec 05 07:06:50 embed-certs-770390 kubelet[727]: I1205 07:06:50.621542     727 scope.go:117] "RemoveContainer" containerID="3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5"
	Dec 05 07:06:50 embed-certs-770390 kubelet[727]: E1205 07:06:50.621743     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jp5dn_kubernetes-dashboard(8bd2761b-8c0a-4674-a8d4-9f688fdcfb79)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn" podUID="8bd2761b-8c0a-4674-a8d4-9f688fdcfb79"
	Dec 05 07:06:52 embed-certs-770390 kubelet[727]: I1205 07:06:52.050123     727 scope.go:117] "RemoveContainer" containerID="3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5"
	Dec 05 07:06:52 embed-certs-770390 kubelet[727]: E1205 07:06:52.050414     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jp5dn_kubernetes-dashboard(8bd2761b-8c0a-4674-a8d4-9f688fdcfb79)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn" podUID="8bd2761b-8c0a-4674-a8d4-9f688fdcfb79"
	Dec 05 07:07:04 embed-certs-770390 kubelet[727]: I1205 07:07:04.534044     727 scope.go:117] "RemoveContainer" containerID="3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5"
	Dec 05 07:07:04 embed-certs-770390 kubelet[727]: I1205 07:07:04.657841     727 scope.go:117] "RemoveContainer" containerID="3d0d5feaf20b44a2bb56a8cd729cbfb115904319673f8bf3518fd736543909d5"
	Dec 05 07:07:04 embed-certs-770390 kubelet[727]: I1205 07:07:04.658030     727 scope.go:117] "RemoveContainer" containerID="9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019"
	Dec 05 07:07:04 embed-certs-770390 kubelet[727]: E1205 07:07:04.658216     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jp5dn_kubernetes-dashboard(8bd2761b-8c0a-4674-a8d4-9f688fdcfb79)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn" podUID="8bd2761b-8c0a-4674-a8d4-9f688fdcfb79"
	Dec 05 07:07:09 embed-certs-770390 kubelet[727]: I1205 07:07:09.675671     727 scope.go:117] "RemoveContainer" containerID="6177c64055ee5f3bacac5f8934dc2061c6a6b0d2a95b03bf4373af7a3cbcaf0b"
	Dec 05 07:07:12 embed-certs-770390 kubelet[727]: I1205 07:07:12.050063     727 scope.go:117] "RemoveContainer" containerID="9392561830b7eda150b3dfbacf8f286830e421439e50f91b4698c7ac175ad019"
	Dec 05 07:07:12 embed-certs-770390 kubelet[727]: E1205 07:07:12.050281     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jp5dn_kubernetes-dashboard(8bd2761b-8c0a-4674-a8d4-9f688fdcfb79)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jp5dn" podUID="8bd2761b-8c0a-4674-a8d4-9f688fdcfb79"
	Dec 05 07:07:24 embed-certs-770390 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:07:24 embed-certs-770390 kubelet[727]: I1205 07:07:24.202475     727 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 05 07:07:24 embed-certs-770390 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:07:24 embed-certs-770390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:07:24 embed-certs-770390 systemd[1]: kubelet.service: Consumed 1.529s CPU time.
	
	
	==> kubernetes-dashboard [7a3eada6f877e1286c7e6a656066b8252366921900d5eaa0ad8a32a8ddfb215e] <==
	2025/12/05 07:06:45 Using namespace: kubernetes-dashboard
	2025/12/05 07:06:45 Using in-cluster config to connect to apiserver
	2025/12/05 07:06:45 Using secret token for csrf signing
	2025/12/05 07:06:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/05 07:06:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/05 07:06:45 Successful initial request to the apiserver, version: v1.34.2
	2025/12/05 07:06:45 Generating JWE encryption key
	2025/12/05 07:06:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/05 07:06:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/05 07:06:45 Initializing JWE encryption key from synchronized object
	2025/12/05 07:06:45 Creating in-cluster Sidecar client
	2025/12/05 07:06:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:06:45 Serving insecurely on HTTP port: 9090
	2025/12/05 07:07:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/05 07:06:45 Starting overwatch
	
	
	==> storage-provisioner [1aa7cd837236b0ef2827c6c01929b44fed4339d14138d8ef55d233b2f13d2088] <==
	I1205 07:07:09.729519       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 07:07:09.738350       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 07:07:09.738389       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1205 07:07:09.740801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:13.195497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:17.455408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:21.053171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:24.109839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:27.131712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:27.136347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:07:27.136560       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 07:07:27.136659       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2811ca68-8b79-41ee-908b-89fe569de67c", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-770390_3d0b39d3-4a54-4991-b0ab-dd2f5b142a28 became leader
	I1205 07:07:27.136743       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-770390_3d0b39d3-4a54-4991-b0ab-dd2f5b142a28!
	W1205 07:07:27.138831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 07:07:27.143244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 07:07:27.236989       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-770390_3d0b39d3-4a54-4991-b0ab-dd2f5b142a28!
	
	
	==> storage-provisioner [6177c64055ee5f3bacac5f8934dc2061c6a6b0d2a95b03bf4373af7a3cbcaf0b] <==
	I1205 07:06:38.910655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 07:07:08.913285       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-770390 -n embed-certs-770390
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-770390 -n embed-certs-770390: exit status 2 (305.01378ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-770390 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
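Note: the post-mortem above boils down to two checks that can be re-run by hand against the same profile. A minimal sketch, assuming the embed-certs-770390 cluster from the log is still around; the crictl step is not part of the harness, it is just one way to look at the restarting containers seen in the kubelet log:

	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-770390
	kubectl --context embed-certs-770390 get po -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 ssh -p embed-certs-770390 -- sudo crictl ps -a | grep -E 'storage-provisioner|dashboard-metrics-scraper'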
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.01s)

Test pass (334/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.02
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 3.93
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.2
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.38
30 TestBinaryMirror 0.8
31 TestOffline 53.5
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 123.08
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 7.4
57 TestAddons/StoppedEnableDisable 16.71
58 TestCertOptions 25.88
59 TestCertExpiration 209.51
61 TestForceSystemdFlag 23.25
62 TestForceSystemdEnv 33.57
67 TestErrorSpam/setup 17.66
68 TestErrorSpam/start 0.62
69 TestErrorSpam/status 0.91
70 TestErrorSpam/pause 5.9
71 TestErrorSpam/unpause 5.98
72 TestErrorSpam/stop 18.08
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 65.69
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 5.99
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.75
84 TestFunctional/serial/CacheCmd/cache/add_local 1.12
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 47.63
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.14
95 TestFunctional/serial/LogsFileCmd 1.17
96 TestFunctional/serial/InvalidService 3.94
98 TestFunctional/parallel/ConfigCmd 0.43
99 TestFunctional/parallel/DashboardCmd 6.64
100 TestFunctional/parallel/DryRun 0.35
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 1.06
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 22.5
110 TestFunctional/parallel/SSHCmd 0.55
111 TestFunctional/parallel/CpCmd 1.79
112 TestFunctional/parallel/MySQL 16.58
113 TestFunctional/parallel/FileSync 0.28
114 TestFunctional/parallel/CertSync 1.64
118 TestFunctional/parallel/NodeLabels 0.05
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
122 TestFunctional/parallel/License 0.42
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
125 TestFunctional/parallel/ProfileCmd/profile_list 0.46
126 TestFunctional/parallel/MountCmd/any-port 5.64
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.18
133 TestFunctional/parallel/MountCmd/specific-port 2.02
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.82
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
145 TestFunctional/parallel/ImageCommands/ImageBuild 5.99
146 TestFunctional/parallel/ImageCommands/Setup 0.94
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
154 TestFunctional/parallel/Version/short 0.06
155 TestFunctional/parallel/Version/components 0.46
156 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
159 TestFunctional/parallel/ServiceCmd/List 1.69
160 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.01
166 TestFunctional/delete_minikube_cached_images 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 44.77
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.01
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.55
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.28
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.49
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.11
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 40.76
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.12
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.14
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 3.83
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.42
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 7.24
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.35
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.15
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.9
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 25.6
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.74
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.93
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 15.43
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.32
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.85
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.62
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.41
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.21
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.28
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.45
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.17
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 13.25
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.5
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.39
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.42
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 5.63
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.67
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.69
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.56
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.69
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.69
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 134.16
266 TestMultiControlPlane/serial/DeployApp 3.82
267 TestMultiControlPlane/serial/PingHostFromPods 0.98
268 TestMultiControlPlane/serial/AddWorkerNode 56.45
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
271 TestMultiControlPlane/serial/CopyFile 16.48
272 TestMultiControlPlane/serial/StopSecondaryNode 9.62
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.29
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 91.49
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.45
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
279 TestMultiControlPlane/serial/StopCluster 37.93
280 TestMultiControlPlane/serial/RestartCluster 60.64
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
282 TestMultiControlPlane/serial/AddSecondaryNode 38.56
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
288 TestJSONOutput/start/Command 39.57
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 7.9
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.21
313 TestKicCustomNetwork/create_custom_network 26.66
314 TestKicCustomNetwork/use_default_bridge_network 21.09
315 TestKicExistingNetwork 21.74
316 TestKicCustomSubnet 26.32
317 TestKicStaticIP 21.94
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 48.74
322 TestMountStart/serial/StartWithMountFirst 7.57
323 TestMountStart/serial/VerifyMountFirst 0.26
324 TestMountStart/serial/StartWithMountSecond 4.69
325 TestMountStart/serial/VerifyMountSecond 0.26
326 TestMountStart/serial/DeleteFirst 1.65
327 TestMountStart/serial/VerifyMountPostDelete 0.26
328 TestMountStart/serial/Stop 1.23
329 TestMountStart/serial/RestartStopped 7.2
330 TestMountStart/serial/VerifyMountPostStop 0.26
333 TestMultiNode/serial/FreshStart2Nodes 92.75
334 TestMultiNode/serial/DeployApp2Nodes 3.5
335 TestMultiNode/serial/PingHostFrom2Pods 0.68
336 TestMultiNode/serial/AddNode 22.95
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.63
339 TestMultiNode/serial/CopyFile 9.39
340 TestMultiNode/serial/StopNode 2.2
341 TestMultiNode/serial/StartAfterStop 7.04
342 TestMultiNode/serial/RestartKeepsNodes 79.36
343 TestMultiNode/serial/DeleteNode 5.17
344 TestMultiNode/serial/StopMultiNode 28.44
345 TestMultiNode/serial/RestartMultiNode 25.27
346 TestMultiNode/serial/ValidateNameConflict 21.36
351 TestPreload 82.94
353 TestScheduledStopUnix 97.33
356 TestInsufficientStorage 8.58
357 TestRunningBinaryUpgrade 291.8
359 TestKubernetesUpgrade 315.06
360 TestMissingContainerUpgrade 73.19
362 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
364 TestPause/serial/Start 46.92
365 TestNoKubernetes/serial/StartWithK8s 34.26
366 TestStoppedBinaryUpgrade/Setup 0.59
367 TestStoppedBinaryUpgrade/Upgrade 303.29
368 TestNoKubernetes/serial/StartWithStopK8s 27.2
369 TestPause/serial/SecondStartNoReconfiguration 13.99
371 TestNoKubernetes/serial/Start 4.31
372 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
373 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
374 TestNoKubernetes/serial/ProfileList 4.76
375 TestNoKubernetes/serial/Stop 3.29
376 TestNoKubernetes/serial/StartNoArgs 6.79
377 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
385 TestNetworkPlugins/group/false 3.55
396 TestNetworkPlugins/group/auto/Start 39.47
397 TestStoppedBinaryUpgrade/MinikubeLogs 1
398 TestNetworkPlugins/group/kindnet/Start 38.04
399 TestNetworkPlugins/group/auto/KubeletFlags 0.3
400 TestNetworkPlugins/group/auto/NetCatPod 9.23
401 TestNetworkPlugins/group/auto/DNS 0.12
402 TestNetworkPlugins/group/auto/Localhost 0.09
403 TestNetworkPlugins/group/auto/HairPin 0.09
404 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
405 TestNetworkPlugins/group/calico/Start 49.52
406 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
407 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
408 TestNetworkPlugins/group/kindnet/DNS 0.12
409 TestNetworkPlugins/group/kindnet/Localhost 0.1
410 TestNetworkPlugins/group/kindnet/HairPin 0.09
411 TestNetworkPlugins/group/custom-flannel/Start 57.4
412 TestNetworkPlugins/group/enable-default-cni/Start 70.59
413 TestNetworkPlugins/group/calico/ControllerPod 6.01
414 TestNetworkPlugins/group/calico/KubeletFlags 0.32
415 TestNetworkPlugins/group/calico/NetCatPod 10.4
416 TestNetworkPlugins/group/flannel/Start 49.54
417 TestNetworkPlugins/group/calico/DNS 0.1
418 TestNetworkPlugins/group/calico/Localhost 0.08
419 TestNetworkPlugins/group/calico/HairPin 0.09
420 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
421 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.2
422 TestNetworkPlugins/group/custom-flannel/DNS 0.12
423 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
424 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
425 TestNetworkPlugins/group/bridge/Start 37.97
427 TestStartStop/group/old-k8s-version/serial/FirstStart 52.5
428 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
429 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
430 TestNetworkPlugins/group/flannel/ControllerPod 6.01
431 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
432 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
433 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
434 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
435 TestNetworkPlugins/group/flannel/NetCatPod 9.2
436 TestNetworkPlugins/group/flannel/DNS 0.12
437 TestNetworkPlugins/group/flannel/Localhost 0.09
438 TestNetworkPlugins/group/flannel/HairPin 0.09
439 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
440 TestNetworkPlugins/group/bridge/NetCatPod 8.18
442 TestStartStop/group/no-preload/serial/FirstStart 51.07
443 TestNetworkPlugins/group/bridge/DNS 0.12
444 TestNetworkPlugins/group/bridge/Localhost 0.12
445 TestNetworkPlugins/group/bridge/HairPin 0.11
447 TestStartStop/group/embed-certs/serial/FirstStart 76.5
448 TestStartStop/group/old-k8s-version/serial/DeployApp 8.27
450 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.27
452 TestStartStop/group/old-k8s-version/serial/Stop 15.96
453 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
454 TestStartStop/group/old-k8s-version/serial/SecondStart 45.79
455 TestStartStop/group/no-preload/serial/DeployApp 8.22
457 TestStartStop/group/no-preload/serial/Stop 16.61
458 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.25
460 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.41
461 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
462 TestStartStop/group/no-preload/serial/SecondStart 46.02
463 TestStartStop/group/embed-certs/serial/DeployApp 7.28
464 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
465 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.83
466 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
468 TestStartStop/group/embed-certs/serial/Stop 18.21
469 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
470 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
473 TestStartStop/group/newest-cni/serial/FirstStart 29.98
474 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
475 TestStartStop/group/embed-certs/serial/SecondStart 45.99
476 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
477 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
478 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
480 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
481 TestStartStop/group/newest-cni/serial/DeployApp 0
483 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
484 TestStartStop/group/newest-cni/serial/Stop 2.64
485 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
486 TestStartStop/group/newest-cni/serial/SecondStart 10.7
487 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
489 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
490 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
491 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
493 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
494 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
495 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22

TestDownloadOnly/v1.28.0/json-events (5.02s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-991192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-991192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.017126938s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.02s)
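For reference, the json-events subtest consumes the line-delimited JSON that -o=json makes minikube emit. A rough way to eyeball the same stream by hand, assuming jq is available and using a throwaway profile name (the event schema itself is not shown in this report):

	out/minikube-linux-amd64 start -p download-only-demo -o=json --download-only --force \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker | jq -r .type
	out/minikube-linux-amd64 delete -p download-only-demo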

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1205 06:04:48.667655   16314 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1205 06:04:48.667741   16314 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
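The preload-exists check only asserts that the preload tarball is already sitting in the local cache. A quick manual equivalent, using the MINIKUBE_HOME path from the log above (it will differ on other machines):

	ls -lh /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4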

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-991192
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-991192: exit status 85 (65.495829ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-991192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-991192 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:04:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:04:43.700347   16326 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:04:43.700538   16326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:43.700550   16326 out.go:374] Setting ErrFile to fd 2...
	I1205 06:04:43.700557   16326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:43.700708   16326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	W1205 06:04:43.700814   16326 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-12758/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-12758/.minikube/config/config.json: no such file or directory
	I1205 06:04:43.701249   16326 out.go:368] Setting JSON to true
	I1205 06:04:43.702076   16326 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2828,"bootTime":1764911856,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:04:43.702123   16326 start.go:143] virtualization: kvm guest
	I1205 06:04:43.706951   16326 out.go:99] [download-only-991192] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1205 06:04:43.707078   16326 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 06:04:43.707131   16326 notify.go:221] Checking for updates...
	I1205 06:04:43.708253   16326 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:04:43.709374   16326 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:04:43.710463   16326 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:04:43.711529   16326 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:04:43.712558   16326 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 06:04:43.714692   16326 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:04:43.714856   16326 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:04:43.737378   16326 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:04:43.737444   16326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:04:43.970677   16326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-05 06:04:43.961430807 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:04:43.970774   16326 docker.go:319] overlay module found
	I1205 06:04:43.972182   16326 out.go:99] Using the docker driver based on user configuration
	I1205 06:04:43.972206   16326 start.go:309] selected driver: docker
	I1205 06:04:43.972211   16326 start.go:927] validating driver "docker" against <nil>
	I1205 06:04:43.972292   16326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:04:44.025776   16326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-05 06:04:44.017230172 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:04:44.025950   16326 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:04:44.026436   16326 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1205 06:04:44.026595   16326 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:04:44.028194   16326 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-991192 host does not exist
	  To start a cluster, run: "minikube start -p download-only-991192"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
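The "exit status 85" from minikube logs is expected by this subtest (it still passes): with --download-only no host is ever created, so there is nothing to collect logs from, as the "control-plane node ... host does not exist" message above spells out. A minimal manual repro sketch with a throwaway profile name:

	out/minikube-linux-amd64 start -p download-only-demo --download-only --force \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker
	out/minikube-linux-amd64 logs -p download-only-demo; echo "exit: $?"   # 85 in this run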

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-991192
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (3.93s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-402726 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-402726 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.932071568s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.93s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1205 06:04:53.010862   16314 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1205 06:04:53.010900   16314 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-402726
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-402726: exit status 85 (71.681084ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-991192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-991192 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ delete  │ -p download-only-991192                                                                                                                                                   │ download-only-991192 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ start   │ -o=json --download-only -p download-only-402726 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-402726 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:04:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:04:49.126822   16687 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:04:49.127459   16687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:49.127468   16687 out.go:374] Setting ErrFile to fd 2...
	I1205 06:04:49.127472   16687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:49.127667   16687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:04:49.128062   16687 out.go:368] Setting JSON to true
	I1205 06:04:49.128775   16687 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2833,"bootTime":1764911856,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:04:49.128822   16687 start.go:143] virtualization: kvm guest
	I1205 06:04:49.130350   16687 out.go:99] [download-only-402726] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:04:49.130502   16687 notify.go:221] Checking for updates...
	I1205 06:04:49.131611   16687 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:04:49.132828   16687 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:04:49.133957   16687 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:04:49.134944   16687 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:04:49.136077   16687 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 06:04:49.138185   16687 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:04:49.138460   16687 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:04:49.161536   16687 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:04:49.161619   16687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:04:49.215200   16687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-05 06:04:49.205335231 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:04:49.215289   16687 docker.go:319] overlay module found
	I1205 06:04:49.216772   16687 out.go:99] Using the docker driver based on user configuration
	I1205 06:04:49.216799   16687 start.go:309] selected driver: docker
	I1205 06:04:49.216805   16687 start.go:927] validating driver "docker" against <nil>
	I1205 06:04:49.216897   16687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:04:49.268337   16687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-05 06:04:49.25869649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:04:49.268539   16687 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:04:49.269027   16687 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1205 06:04:49.269174   16687 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:04:49.270715   16687 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-402726 host does not exist
	  To start a cluster, run: "minikube start -p download-only-402726"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-402726
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (2.2s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-500949 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-500949 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.199563216s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.20s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-500949
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-500949: exit status 85 (67.707386ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-991192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-991192 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ delete  │ -p download-only-991192                                                                                                                                                          │ download-only-991192 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ start   │ -o=json --download-only -p download-only-402726 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-402726 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ delete  │ -p download-only-402726                                                                                                                                                          │ download-only-402726 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │ 05 Dec 25 06:04 UTC │
	│ start   │ -o=json --download-only -p download-only-500949 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-500949 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:04:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:04:53.476598   17045 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:04:53.476688   17045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:53.476698   17045 out.go:374] Setting ErrFile to fd 2...
	I1205 06:04:53.476704   17045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:53.476892   17045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:04:53.477346   17045 out.go:368] Setting JSON to true
	I1205 06:04:53.478089   17045 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2837,"bootTime":1764911856,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:04:53.478135   17045 start.go:143] virtualization: kvm guest
	I1205 06:04:53.479833   17045 out.go:99] [download-only-500949] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:04:53.480010   17045 notify.go:221] Checking for updates...
	I1205 06:04:53.481202   17045 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:04:53.482472   17045 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:04:53.483740   17045 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:04:53.484945   17045 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:04:53.486109   17045 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 06:04:53.488061   17045 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:04:53.488311   17045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:04:53.509172   17045 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:04:53.509279   17045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:04:53.561902   17045 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-05 06:04:53.553253445 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:04:53.562037   17045 docker.go:319] overlay module found
	I1205 06:04:53.563694   17045 out.go:99] Using the docker driver based on user configuration
	I1205 06:04:53.563733   17045 start.go:309] selected driver: docker
	I1205 06:04:53.563754   17045 start.go:927] validating driver "docker" against <nil>
	I1205 06:04:53.563860   17045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:04:53.614806   17045 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-05 06:04:53.605875028 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:04:53.614963   17045 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:04:53.615457   17045 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1205 06:04:53.615598   17045 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:04:53.617239   17045 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-500949 host does not exist
	  To start a cluster, run: "minikube start -p download-only-500949"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-500949
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.38s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-737782 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-737782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-737782
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.8s)

=== RUN   TestBinaryMirror
I1205 06:04:56.949082   16314 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-565262 --alsologtostderr --binary-mirror http://127.0.0.1:40985 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-565262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-565262
--- PASS: TestBinaryMirror (0.80s)

TestOffline (53.5s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-314280 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-314280 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (46.582259537s)
helpers_test.go:175: Cleaning up "offline-crio-314280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-314280
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-314280: (6.920137779s)
--- PASS: TestOffline (53.50s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-177895
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-177895: exit status 85 (59.590003ms)

                                                
                                                
-- stdout --
	* Profile "addons-177895" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-177895"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-177895
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-177895: exit status 85 (58.602187ms)

                                                
                                                
-- stdout --
	* Profile "addons-177895" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-177895"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (123.08s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-177895 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-177895 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.075680513s)
--- PASS: TestAddons/Setup (123.08s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-177895 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-177895 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (7.4s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-177895 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-177895 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [815ba021-005d-4a49-9b68-12ac2d4fd4bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [815ba021-005d-4a49-9b68-12ac2d4fd4bc] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003089041s
addons_test.go:694: (dbg) Run:  kubectl --context addons-177895 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-177895 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-177895 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.40s)

TestAddons/StoppedEnableDisable (16.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-177895
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-177895: (16.439754566s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-177895
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-177895
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-177895
--- PASS: TestAddons/StoppedEnableDisable (16.71s)

TestCertOptions (25.88s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-357047 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1205 07:01:07.795857   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-357047 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.841585082s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-357047 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-357047 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-357047 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-357047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-357047
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-357047: (2.414374476s)
--- PASS: TestCertOptions (25.88s)

TestCertExpiration (209.51s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-825063 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-825063 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.540717695s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-825063 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-825063 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (4.586165456s)
helpers_test.go:175: Cleaning up "cert-expiration-825063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-825063
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-825063: (2.380527452s)
--- PASS: TestCertExpiration (209.51s)

TestForceSystemdFlag (23.25s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-716893 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1205 06:58:04.729510   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-716893 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.586622824s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-716893 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-716893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-716893
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-716893: (2.3732428s)
--- PASS: TestForceSystemdFlag (23.25s)

TestForceSystemdEnv (33.57s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-435873 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-435873 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.833954006s)
helpers_test.go:175: Cleaning up "force-systemd-env-435873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-435873
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-435873: (2.730844967s)
--- PASS: TestForceSystemdEnv (33.57s)

TestErrorSpam/setup (17.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-408306 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-408306 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-408306 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-408306 --driver=docker  --container-runtime=crio: (17.658481617s)
--- PASS: TestErrorSpam/setup (17.66s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 status
--- PASS: TestErrorSpam/status (0.91s)

TestErrorSpam/pause (5.9s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 pause: exit status 80 (2.190520792s)

                                                
                                                
-- stdout --
	* Pausing node nospam-408306 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:10:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 pause: exit status 80 (2.069823129s)

                                                
                                                
-- stdout --
	* Pausing node nospam-408306 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:10:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 pause: exit status 80 (1.64205703s)

                                                
                                                
-- stdout --
	* Pausing node nospam-408306 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:10:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.90s)

TestErrorSpam/unpause (5.98s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 unpause: exit status 80 (2.056323559s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-408306 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:10:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 unpause: exit status 80 (1.895667471s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-408306 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:10:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 unpause: exit status 80 (2.025972853s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-408306 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:10:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.98s)

TestErrorSpam/stop (18.08s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 stop: (17.882710166s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408306 --log_dir /tmp/nospam-408306 stop
--- PASS: TestErrorSpam/stop (18.08s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/test/nested/copy/16314/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (65.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882265 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-882265 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m5.689366055s)
--- PASS: TestFunctional/serial/StartWithProxy (65.69s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.99s)

=== RUN   TestFunctional/serial/SoftStart
I1205 06:11:58.405047   16314 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882265 --alsologtostderr -v=8
E1205 06:12:01.487945   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:01.494656   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:01.506036   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:01.527277   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:01.569077   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:01.650424   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:01.812074   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:02.134371   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:02.776821   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:04.058506   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-882265 --alsologtostderr -v=8: (5.991466358s)
functional_test.go:678: soft start took 5.992116597s for "functional-882265" cluster.
I1205 06:12:04.396937   16314 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (5.99s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-882265 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 cache add registry.k8s.io/pause:latest
E1205 06:12:06.619782   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-882265 /tmp/TestFunctionalserialCacheCmdcacheadd_local1845155221/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 cache add minikube-local-cache-test:functional-882265
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 cache delete minikube-local-cache-test:functional-882265
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-882265
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (271.87866ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 kubectl -- --context functional-882265 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-882265 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (47.63s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882265 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1205 06:12:11.741990   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:21.984066   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:12:42.465892   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-882265 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.634088463s)
functional_test.go:776: restart took 47.634200393s for "functional-882265" cluster.
I1205 06:12:58.260612   16314 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (47.63s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-882265 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.14s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-882265 logs: (1.138666646s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

TestFunctional/serial/LogsFileCmd (1.17s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 logs --file /tmp/TestFunctionalserialLogsFileCmd3955459396/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-882265 logs --file /tmp/TestFunctionalserialLogsFileCmd3955459396/001/logs.txt: (1.164759461s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.17s)

TestFunctional/serial/InvalidService (3.94s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-882265 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-882265
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-882265: exit status 115 (329.30714ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31319 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-882265 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)
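For reference, the failure mode exercised above can be replayed by hand against a running profile; a sketch using the commands from this log (the contents of testdata/invalidsvc.yaml are not shown here, but any Service whose selector matches no running pod behaves the same way):

	kubectl --context functional-882265 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-882265    # expected: exit status 115, SVC_UNREACHABLE
	kubectl --context functional-882265 delete -f testdata/invalidsvc.yaml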

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 config get cpus: exit status 14 (86.24551ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 config get cpus: exit status 14 (71.670971ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
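The get/set/unset cycle above can be replayed directly; a sketch built from the commands logged in this run (querying an unset key is expected to exit 14, as seen above):

	out/minikube-linux-amd64 -p functional-882265 config unset cpus
	out/minikube-linux-amd64 -p functional-882265 config get cpus      # exit status 14: key not present
	out/minikube-linux-amd64 -p functional-882265 config set cpus 2
	out/minikube-linux-amd64 -p functional-882265 config get cpus      # now succeeds
	out/minikube-linux-amd64 -p functional-882265 config unset cpus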

TestFunctional/parallel/DashboardCmd (6.64s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-882265 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-882265 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 53236: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.64s)

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882265 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-882265 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (150.722214ms)
-- stdout --
	* [functional-882265] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1205 06:13:15.150783   52771 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:13:15.151034   52771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:15.151043   52771 out.go:374] Setting ErrFile to fd 2...
	I1205 06:13:15.151047   52771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:15.151276   52771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:13:15.151758   52771 out.go:368] Setting JSON to false
	I1205 06:13:15.152682   52771 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3339,"bootTime":1764911856,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:13:15.152732   52771 start.go:143] virtualization: kvm guest
	I1205 06:13:15.154593   52771 out.go:179] * [functional-882265] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:13:15.156374   52771 notify.go:221] Checking for updates...
	I1205 06:13:15.156396   52771 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:13:15.157716   52771 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:13:15.159397   52771 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:13:15.160612   52771 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:13:15.161868   52771 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:13:15.163133   52771 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:13:15.164692   52771 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:15.165165   52771 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:13:15.187752   52771 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:13:15.187819   52771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:13:15.240665   52771 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-05 06:13:15.231687548 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:13:15.240775   52771 docker.go:319] overlay module found
	I1205 06:13:15.242424   52771 out.go:179] * Using the docker driver based on existing profile
	I1205 06:13:15.243489   52771 start.go:309] selected driver: docker
	I1205 06:13:15.243499   52771 start.go:927] validating driver "docker" against &{Name:functional-882265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-882265 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:13:15.243571   52771 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:13:15.245084   52771 out.go:203] 
	W1205 06:13:15.246247   52771 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 06:13:15.247431   52771 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882265 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)
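The memory validation that produces exit status 23 above can be reproduced with the same dry-run invocation; a sketch using the flags logged in this run:

	out/minikube-linux-amd64 start -p functional-882265 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio
	# expected: exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY (250MiB is below the 1800MB usable minimum)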

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882265 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-882265 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (164.844234ms)
-- stdout --
	* [functional-882265] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1205 06:13:06.957405   50085 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:13:06.957485   50085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:06.957491   50085 out.go:374] Setting ErrFile to fd 2...
	I1205 06:13:06.957497   50085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:06.957795   50085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:13:06.958157   50085 out.go:368] Setting JSON to false
	I1205 06:13:06.959028   50085 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3331,"bootTime":1764911856,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:13:06.959080   50085 start.go:143] virtualization: kvm guest
	I1205 06:13:06.960847   50085 out.go:179] * [functional-882265] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1205 06:13:06.962439   50085 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:13:06.962446   50085 notify.go:221] Checking for updates...
	I1205 06:13:06.963616   50085 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:13:06.964883   50085 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:13:06.966096   50085 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:13:06.967197   50085 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:13:06.968256   50085 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:13:06.969687   50085 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:06.970233   50085 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:13:06.994602   50085 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:13:06.994746   50085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:13:07.055810   50085 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-05 06:13:07.04533153 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:13:07.055923   50085 docker.go:319] overlay module found
	I1205 06:13:07.057543   50085 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1205 06:13:07.058512   50085 start.go:309] selected driver: docker
	I1205 06:13:07.058523   50085 start.go:927] validating driver "docker" against &{Name:functional-882265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-882265 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:13:07.058605   50085 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:13:07.060118   50085 out.go:203] 
	W1205 06:13:07.061105   50085 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 06:13:07.062186   50085 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (22.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [551af0ac-ecf3-4fd4-9ce6-f172cfc9227b] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0030676s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-882265 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-882265 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-882265 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-882265 apply -f testdata/storage-provisioner/pod.yaml
I1205 06:13:11.696920   16314 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d80916c3-0b83-4cb0-ad7a-5c4b9a608e99] Pending
helpers_test.go:352: "sp-pod" [d80916c3-0b83-4cb0-ad7a-5c4b9a608e99] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d80916c3-0b83-4cb0-ad7a-5c4b9a608e99] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004425045s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-882265 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-882265 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-882265 apply -f testdata/storage-provisioner/pod.yaml
I1205 06:13:21.739818   16314 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0634cfab-44e2-4867-96f3-226406cd5c85] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/12/05 06:13:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "sp-pod" [0634cfab-44e2-4867-96f3-226406cd5c85] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002900972s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-882265 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.50s)
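A sketch of the claim-and-reattach flow the test walks through, using the commands logged above (the pvc.yaml and pod.yaml manifests live in the minikube test tree and are not reproduced here):

	kubectl --context functional-882265 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-882265 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-882265 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-882265 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-882265 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-882265 exec sp-pod -- ls /tmp/mount    # the file written before the pod was deleted should still be there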

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.79s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh -n functional-882265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 cp functional-882265:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4002401588/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh -n functional-882265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh -n functional-882265 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.79s)
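The copy round trip can be replayed as follows; a sketch, with the local destination path chosen arbitrarily (the test itself copies into a per-run temp directory):

	out/minikube-linux-amd64 -p functional-882265 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-882265 ssh -n functional-882265 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p functional-882265 cp functional-882265:/home/docker/cp-test.txt /tmp/cp-test.txt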

TestFunctional/parallel/MySQL (16.58s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-882265 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-x2kd8" [f94e552d-aee2-4bdf-90b7-421093f96f82] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-x2kd8" [f94e552d-aee2-4bdf-90b7-421093f96f82] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.002539731s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-882265 exec mysql-5bb876957f-x2kd8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-882265 exec mysql-5bb876957f-x2kd8 -- mysql -ppassword -e "show databases;": exit status 1 (80.182011ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1205 06:13:44.351603   16314 retry.go:31] will retry after 1.249148072s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-882265 exec mysql-5bb876957f-x2kd8 -- mysql -ppassword -e "show databases;"
E1205 06:14:45.349280   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:01.488637   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:29.191147   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:22:01.488381   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (16.58s)
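The initial exit status 1 above is expected while mysqld is still initialising inside the pod; the query is simply retried, as in this sketch (the pod name is generated by the deployment in testdata/mysql.yaml and will differ between runs):

	kubectl --context functional-882265 get pods -l app=mysql
	kubectl --context functional-882265 exec mysql-5bb876957f-x2kd8 -- mysql -ppassword -e "show databases;"   # retry until ERROR 2002 stops appearing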

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/16314/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo cat /etc/test/nested/copy/16314/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.64s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/16314.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo cat /etc/ssl/certs/16314.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/16314.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo cat /usr/share/ca-certificates/16314.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/163142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo cat /etc/ssl/certs/163142.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/163142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo cat /usr/share/ca-certificates/163142.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.64s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-882265 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 ssh "sudo systemctl is-active docker": exit status 1 (263.991565ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 ssh "sudo systemctl is-active containerd": exit status 1 (263.918502ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
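On a crio profile both alternate runtimes should report inactive, which is what the non-zero exits above capture (systemctl is-active exits 3 for an inactive unit, surfaced here through ssh); a sketch using the commands from this run:

	out/minikube-linux-amd64 -p functional-882265 ssh "sudo systemctl is-active docker"       # prints: inactive
	out/minikube-linux-amd64 -p functional-882265 ssh "sudo systemctl is-active containerd"   # prints: inactive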

TestFunctional/parallel/License (0.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "396.790777ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.056882ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/MountCmd/any-port (5.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdany-port3570464239/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764915185626470955" to /tmp/TestFunctionalparallelMountCmdany-port3570464239/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764915185626470955" to /tmp/TestFunctionalparallelMountCmdany-port3570464239/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764915185626470955" to /tmp/TestFunctionalparallelMountCmdany-port3570464239/001/test-1764915185626470955
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (304.864502ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1205 06:13:05.931711   16314 retry.go:31] will retry after 401.438371ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 06:13 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 06:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 06:13 test-1764915185626470955
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh cat /mount-9p/test-1764915185626470955
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-882265 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9e452ebe-6e7d-4169-b33c-2f9a460fbf89] Pending
helpers_test.go:352: "busybox-mount" [9e452ebe-6e7d-4169-b33c-2f9a460fbf89] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9e452ebe-6e7d-4169-b33c-2f9a460fbf89] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9e452ebe-6e7d-4169-b33c-2f9a460fbf89] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003314319s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-882265 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdany-port3570464239/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.64s)
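The 9p mount workflow above can be replayed manually; a sketch, where LOCAL_DIR stands in for the per-run temp directory the test creates:

	out/minikube-linux-amd64 mount -p functional-882265 LOCAL_DIR:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T /mount-9p | grep 9p"    # may need a retry while the mount comes up
	out/minikube-linux-amd64 -p functional-882265 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-882265 ssh "sudo umount -f /mount-9p"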

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "359.721159ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.748108ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-882265 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-882265 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-882265 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 50461: os: process already finished
helpers_test.go:525: unable to kill pid 50264: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-882265 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-882265 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-882265 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [a935c539-1251-4211-8a96-bc6231e1c35b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [a935c539-1251-4211-8a96-bc6231e1c35b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003189193s
I1205 06:13:15.752340   16314 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.18s)

TestFunctional/parallel/MountCmd/specific-port (2.02s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdspecific-port4130510110/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.925767ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1205 06:13:11.557493   16314 retry.go:31] will retry after 641.129493ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdspecific-port4130510110/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 ssh "sudo umount -f /mount-9p": exit status 1 (335.359549ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-882265 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdspecific-port4130510110/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185683990/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185683990/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185683990/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T" /mount1: exit status 1 (407.701202ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1205 06:13:13.688453   16314 retry.go:31] will retry after 506.678848ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-882265 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185683990/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185683990/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185683990/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-882265 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.124.225 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-882265 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882265 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882265 image ls --format short --alsologtostderr:
I1205 06:13:29.999360   55506 out.go:360] Setting OutFile to fd 1 ...
I1205 06:13:29.999460   55506 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:13:29.999468   55506 out.go:374] Setting ErrFile to fd 2...
I1205 06:13:29.999472   55506 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:13:29.999664   55506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
I1205 06:13:30.000174   55506 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:13:30.000258   55506 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:13:30.000676   55506 cli_runner.go:164] Run: docker container inspect functional-882265 --format={{.State.Status}}
I1205 06:13:30.018209   55506 ssh_runner.go:195] Run: systemctl --version
I1205 06:13:30.018258   55506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-882265
I1205 06:13:30.034756   55506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-882265/id_rsa Username:docker}
I1205 06:13:30.130101   55506 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882265 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882265 image ls --format table --alsologtostderr:
I1205 06:13:30.727824   55710 out.go:360] Setting OutFile to fd 1 ...
I1205 06:13:30.728068   55710 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:13:30.728084   55710 out.go:374] Setting ErrFile to fd 2...
I1205 06:13:30.728090   55710 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:13:30.728441   55710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
I1205 06:13:30.729402   55710 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:13:30.729553   55710 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:13:30.730189   55710 cli_runner.go:164] Run: docker container inspect functional-882265 --format={{.State.Status}}
I1205 06:13:30.754858   55710 ssh_runner.go:195] Run: systemctl --version
I1205 06:13:30.754918   55710 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-882265
I1205 06:13:30.778569   55710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-882265/id_rsa Username:docker}
I1205 06:13:30.889531   55710 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882265 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"r
epoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400
542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"15
5491845"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36
e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d80
3e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882265 image ls --format json --alsologtostderr:
I1205 06:13:30.448218   55636 out.go:360] Setting OutFile to fd 1 ...
I1205 06:13:30.448313   55636 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:13:30.448333   55636 out.go:374] Setting ErrFile to fd 2...
I1205 06:13:30.448339   55636 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:13:30.448571   55636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
I1205 06:13:30.449255   55636 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:13:30.449382   55636 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:13:30.449800   55636 cli_runner.go:164] Run: docker container inspect functional-882265 --format={{.State.Status}}
I1205 06:13:30.475689   55636 ssh_runner.go:195] Run: systemctl --version
I1205 06:13:30.475849   55636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-882265
I1205 06:13:30.497596   55636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-882265/id_rsa Username:docker}
I1205 06:13:30.610066   55636 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882265 image ls --format yaml --alsologtostderr:
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882265 image ls --format yaml --alsologtostderr:
I1205 06:13:30.216215   55561 out.go:360] Setting OutFile to fd 1 ...
I1205 06:13:30.216341   55561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:13:30.216354   55561 out.go:374] Setting ErrFile to fd 2...
I1205 06:13:30.216360   55561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:13:30.216570   55561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
I1205 06:13:30.217082   55561 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:13:30.217188   55561 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:13:30.217573   55561 cli_runner.go:164] Run: docker container inspect functional-882265 --format={{.State.Status}}
I1205 06:13:30.234913   55561 ssh_runner.go:195] Run: systemctl --version
I1205 06:13:30.234958   55561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-882265
I1205 06:13:30.252086   55561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-882265/id_rsa Username:docker}
I1205 06:13:30.350691   55561 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882265 ssh pgrep buildkitd: exit status 1 (340.985365ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image build -t localhost/my-image:functional-882265 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-882265 image build -t localhost/my-image:functional-882265 testdata/build --alsologtostderr: (5.420870841s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882265 image build -t localhost/my-image:functional-882265 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 12b809cd01a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-882265
--> 483b3810a65
Successfully tagged localhost/my-image:functional-882265
483b3810a65f4553028700c3591141797eb4904f2057181ecb4d3fade328bfc2
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882265 image build -t localhost/my-image:functional-882265 testdata/build --alsologtostderr:
I1205 06:13:31.349746   55861 out.go:360] Setting OutFile to fd 1 ...
I1205 06:13:31.350076   55861 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:13:31.350088   55861 out.go:374] Setting ErrFile to fd 2...
I1205 06:13:31.350094   55861 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:13:31.350386   55861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
I1205 06:13:31.351126   55861 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:13:31.351914   55861 config.go:182] Loaded profile config "functional-882265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:13:31.352565   55861 cli_runner.go:164] Run: docker container inspect functional-882265 --format={{.State.Status}}
I1205 06:13:31.375078   55861 ssh_runner.go:195] Run: systemctl --version
I1205 06:13:31.375127   55861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-882265
I1205 06:13:31.396285   55861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-882265/id_rsa Username:docker}
I1205 06:13:31.503443   55861 build_images.go:162] Building image from path: /tmp/build.133449248.tar
I1205 06:13:31.503508   55861 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 06:13:31.513413   55861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.133449248.tar
I1205 06:13:31.517192   55861 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.133449248.tar: stat -c "%s %y" /var/lib/minikube/build/build.133449248.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.133449248.tar': No such file or directory
I1205 06:13:31.517221   55861 ssh_runner.go:362] scp /tmp/build.133449248.tar --> /var/lib/minikube/build/build.133449248.tar (3072 bytes)
I1205 06:13:31.538170   55861 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.133449248
I1205 06:13:31.546772   55861 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.133449248 -xf /var/lib/minikube/build/build.133449248.tar
I1205 06:13:31.555845   55861 crio.go:315] Building image: /var/lib/minikube/build/build.133449248
I1205 06:13:31.555921   55861 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-882265 /var/lib/minikube/build/build.133449248 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1205 06:13:36.674055   55861 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-882265 /var/lib/minikube/build/build.133449248 --cgroup-manager=cgroupfs: (5.11809823s)
I1205 06:13:36.674134   55861 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.133449248
I1205 06:13:36.682298   55861 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.133449248.tar
I1205 06:13:36.689506   55861 build_images.go:218] Built localhost/my-image:functional-882265 from /tmp/build.133449248.tar
I1205 06:13:36.689535   55861 build_images.go:134] succeeded building to: functional-882265
I1205 06:13:36.689541   55861 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-882265
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image rm kicbase/echo-server:functional-882265 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-882265 service list: (1.688005673s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-882265 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-882265 service list -o json: (1.688466749s)
functional_test.go:1504: Took "1.688582766s" to run "out/minikube-linux-amd64 -p functional-882265 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-882265
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-882265
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-882265
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-12758/.minikube/files/etc/test/nested/copy/16314/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (44.77s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959058 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-959058 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (44.767910119s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (44.77s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1205 06:24:06.631598   16314 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959058 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-959058 --alsologtostderr -v=8: (6.013391921s)
functional_test.go:678: soft start took 6.01379719s for "functional-959058" cluster.
I1205 06:24:12.645358   16314 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-959058 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2780642586/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 cache add minikube-local-cache-test:functional-959058
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 cache delete minikube-local-cache-test:functional-959058
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-959058
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (271.545178ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 kubectl -- --context functional-959058 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-959058 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (40.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959058 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-959058 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.757589231s)
functional_test.go:776: restart took 40.757712476s for "functional-959058" cluster.
I1205 06:24:59.352269   16314 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (40.76s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-959058 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-959058 logs: (1.120513377s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3205949233/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-959058 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3205949233/001/logs.txt: (1.135191352s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-959058 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-959058
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-959058: exit status 115 (329.240174ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32078 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-959058 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.83s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 config get cpus: exit status 14 (72.365744ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 config get cpus: exit status 14 (71.558107ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)
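
A sketch of the same set/get/unset round trip, assuming the binary path and profile name from this run; the point is that `config get` on an unset key exits 14, while set/unset and get on a present key exit 0.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run invokes the minikube binary from this run and returns its exit code.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "functional-959058"}, args...)...)
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1 // binary not found / not executable
	}
	return 0
}

func main() {
	fmt.Println(run("config", "unset", "cpus"))    // 0
	fmt.Println(run("config", "get", "cpus"))      // 14: key not in config
	fmt.Println(run("config", "set", "cpus", "2")) // 0
	fmt.Println(run("config", "get", "cpus"))      // 0
	fmt.Println(run("config", "unset", "cpus"))    // 0
	fmt.Println(run("config", "get", "cpus"))      // 14 again
}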

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (7.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-959058 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-959058 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 79731: os: process already finished
E1205 06:27:01.488121   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:04.730349   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:04.736695   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:04.748015   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:04.769353   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:04.810672   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:04.892012   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:05.053483   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:05.375691   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:06.017765   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:07.299295   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:09.861583   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:14.983364   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:24.553452   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:25.224970   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:45.706663   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:29:26.668523   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:48.590714   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:32:01.488561   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:33:04.730109   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:33:32.432582   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (7.24s)
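
The daemon/stop pattern above, and the harmless "process already finished" note, can be sketched as follows, assuming the same binary path, port, and profile; this is an illustration, not the helpers_test.go implementation.

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Same flags as the daemon above; binary path, port, and profile are from this run.
	cmd := exec.Command("out/minikube-linux-amd64", "dashboard", "--url", "--port", "36195", "-p", "functional-959058")
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	time.Sleep(5 * time.Second) // let it run briefly, as the test does

	// Tolerate a process that already exited on its own ("process already finished").
	if err := cmd.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
		fmt.Println("kill:", err)
	}
	_ = cmd.Wait() // reap the child; the error here only reflects the kill
}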

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-959058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (153.23783ms)

                                                
                                                
-- stdout --
	* [functional-959058] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:25:33.357090   79066 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:25:33.357212   79066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:25:33.357222   79066 out.go:374] Setting ErrFile to fd 2...
	I1205 06:25:33.357229   79066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:25:33.357448   79066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:25:33.357844   79066 out.go:368] Setting JSON to false
	I1205 06:25:33.358745   79066 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4077,"bootTime":1764911856,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:25:33.358797   79066 start.go:143] virtualization: kvm guest
	I1205 06:25:33.360465   79066 out.go:179] * [functional-959058] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:25:33.361675   79066 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:25:33.361673   79066 notify.go:221] Checking for updates...
	I1205 06:25:33.363993   79066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:25:33.365183   79066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:25:33.366433   79066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:25:33.367639   79066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:25:33.368724   79066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:25:33.370314   79066 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:25:33.370995   79066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:25:33.396347   79066 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:25:33.396407   79066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:25:33.448466   79066 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-05 06:25:33.438567406 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:25:33.448580   79066 docker.go:319] overlay module found
	I1205 06:25:33.450233   79066 out.go:179] * Using the docker driver based on existing profile
	I1205 06:25:33.451412   79066 start.go:309] selected driver: docker
	I1205 06:25:33.451428   79066 start.go:927] validating driver "docker" against &{Name:functional-959058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959058 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:25:33.451506   79066 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:25:33.452978   79066 out.go:203] 
	W1205 06:25:33.454046   79066 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 06:25:33.455035   79066 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959058 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.35s)
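
The RSRC_INSUFFICIENT_REQ_MEMORY exit seen above is a pre-flight validation: requests below the usable minimum quoted in the message (1800MB) are rejected before any cluster work starts. A standalone sketch of that rule (not minikube's actual validation code):

package main

import "fmt"

const minUsableMemoryMB = 1800 // the minimum quoted in the message above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, like --memory 250MB above
	fmt.Println(validateMemory(4096)) // nil; the profile's configured 4096MB passes
}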

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-959058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (153.280097ms)

                                                
                                                
-- stdout --
	* [functional-959058] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:25:33.205913   78981 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:25:33.206002   78981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:25:33.206013   78981 out.go:374] Setting ErrFile to fd 2...
	I1205 06:25:33.206018   78981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:25:33.206309   78981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:25:33.206753   78981 out.go:368] Setting JSON to false
	I1205 06:25:33.207656   78981 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4077,"bootTime":1764911856,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:25:33.207703   78981 start.go:143] virtualization: kvm guest
	I1205 06:25:33.210081   78981 out.go:179] * [functional-959058] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1205 06:25:33.211228   78981 notify.go:221] Checking for updates...
	I1205 06:25:33.211248   78981 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:25:33.212371   78981 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:25:33.213609   78981 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:25:33.214804   78981 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:25:33.215859   78981 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:25:33.216925   78981 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:25:33.218483   78981 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:25:33.218939   78981 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:25:33.243086   78981 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:25:33.243224   78981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:25:33.294406   78981 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-05 06:25:33.285252524 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:25:33.294560   78981 docker.go:319] overlay module found
	I1205 06:25:33.296073   78981 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1205 06:25:33.297298   78981 start.go:309] selected driver: docker
	I1205 06:25:33.297312   78981 start.go:927] validating driver "docker" against &{Name:functional-959058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959058 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:25:33.297428   78981 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:25:33.299138   78981 out.go:203] 
	W1205 06:25:33.300284   78981 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 06:25:33.301446   78981 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.90s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (25.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [66747506-a4b9-4cae-9b16-5035b278765c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004625835s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-959058 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-959058 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-959058 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-959058 apply -f testdata/storage-provisioner/pod.yaml
I1205 06:25:14.596061   16314 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4bbb2a38-6156-49a1-a4a8-a1b7b8285f5e] Pending
helpers_test.go:352: "sp-pod" [4bbb2a38-6156-49a1-a4a8-a1b7b8285f5e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4bbb2a38-6156-49a1-a4a8-a1b7b8285f5e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003816477s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-959058 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-959058 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-959058 apply -f testdata/storage-provisioner/pod.yaml
I1205 06:25:25.669982   16314 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [644fd04d-fb0d-4ffd-85c9-bd39b04a3853] Pending
helpers_test.go:352: "sp-pod" [644fd04d-fb0d-4ffd-85c9-bd39b04a3853] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [644fd04d-fb0d-4ffd-85c9-bd39b04a3853] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002961232s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-959058 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (25.60s)
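
The persistence check above boils down to: write a file through the first pod, recreate the pod against the same PVC, and confirm the file is still there. A sketch using the kubectl context and manifests from this run (assumptions; the real test also waits for the recreated pod to be Running before the final exec):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the context used in this report.
func kubectl(args ...string) ([]byte, error) {
	return exec.Command("kubectl", append([]string{"--context", "functional-959058"}, args...)...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write through the mounted PVC
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // drop the pod, keep the claim
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // new pod bound to the same PVC
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},              // expect "foo" to still be listed
	}
	for _, s := range steps {
		out, err := kubectl(s...)
		fmt.Printf("kubectl %v -> err=%v\n%s", s, err, out)
	}
}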

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh -n functional-959058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 cp functional-959058:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3201623336/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh -n functional-959058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh -n functional-959058 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.93s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (15.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-959058 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-j8fqk" [a2d041c5-346a-4e0b-88c0-8d244512849d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-j8fqk" [a2d041c5-346a-4e0b-88c0-8d244512849d] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 14.00299221s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-959058 exec mysql-844cf969f6-j8fqk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-959058 exec mysql-844cf969f6-j8fqk -- mysql -ppassword -e "show databases;": exit status 1 (86.022709ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1205 06:25:20.420454   16314 retry.go:31] will retry after 1.033053356s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-959058 exec mysql-844cf969f6-j8fqk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (15.43s)
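
The ERROR 2002 followed by "will retry after 1.03s" is the usual mysqld warm-up race; the check is simply retried until the socket is up. A sketch of that retry loop, with the pod name and context copied from this run for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func showDatabases() ([]byte, error) {
	return exec.Command("kubectl", "--context", "functional-959058",
		"exec", "mysql-844cf969f6-j8fqk", "--",
		"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	wait := time.Second
	for {
		out, err := showDatabases()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("giving up: %v\n%s", err, out)
			return
		}
		time.Sleep(wait) // the harness above waited about 1s before its retry
		wait *= 2
	}
}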

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/16314/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo cat /etc/test/nested/copy/16314/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/16314.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo cat /etc/ssl/certs/16314.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/16314.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo cat /usr/share/ca-certificates/16314.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/163142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo cat /etc/ssl/certs/163142.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/163142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo cat /usr/share/ca-certificates/163142.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.85s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-959058 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)
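
The go-template passed to kubectl above ranges over .items[0].metadata.labels and prints only the keys. A standalone sketch of the same template against an in-memory stand-in for the node JSON (the label values here are made up):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Shape mirrors `kubectl get nodes -o json`: .items[0].metadata.labels.
	data := map[string]any{
		"items": []any{
			map[string]any{
				"metadata": map[string]any{
					"labels": map[string]string{
						"kubernetes.io/hostname": "functional-959058",
						"kubernetes.io/os":       "linux",
					},
				},
			},
		},
	}
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	_ = tmpl.Execute(os.Stdout, data) // prints the label keys separated by spaces
}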

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 ssh "sudo systemctl is-active docker": exit status 1 (302.908126ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 ssh "sudo systemctl is-active containerd": exit status 1 (313.902091ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.62s)
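
systemctl is-active prints "inactive" and exits non-zero for a stopped unit, which is exactly what this test wants for the runtimes that are not in use (docker and containerd on a crio cluster). A sketch of the same probe, assuming the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeState asks the node, over `minikube ssh`, whether a runtime unit is active.
func runtimeState(profile, unit string) string {
	// Output() keeps stdout only; stderr carries the "Process exited with status 3" notice.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		// On this crio cluster the first two should report "inactive", crio "active".
		fmt.Printf("%s: %s\n", unit, runtimeState("functional-959058", unit))
	}
}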

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-959058 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-959058 image ls --format short --alsologtostderr:
I1205 06:25:34.695089   79725 out.go:360] Setting OutFile to fd 1 ...
I1205 06:25:34.695355   79725 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:25:34.695364   79725 out.go:374] Setting ErrFile to fd 2...
I1205 06:25:34.695378   79725 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:25:34.695651   79725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
I1205 06:25:34.696233   79725 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:25:34.696348   79725 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:25:34.696966   79725 cli_runner.go:164] Run: docker container inspect functional-959058 --format={{.State.Status}}
I1205 06:25:34.715275   79725 ssh_runner.go:195] Run: systemctl --version
I1205 06:25:34.715351   79725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-959058
I1205 06:25:34.734592   79725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-959058/id_rsa Username:docker}
I1205 06:25:34.833885   79725 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-959058 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 740kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-959058 image ls --format table --alsologtostderr:
I1205 06:25:35.143904   79888 out.go:360] Setting OutFile to fd 1 ...
I1205 06:25:35.144002   79888 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:25:35.144012   79888 out.go:374] Setting ErrFile to fd 2...
I1205 06:25:35.144019   79888 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:25:35.144184   79888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
I1205 06:25:35.144695   79888 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:25:35.144809   79888 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:25:35.145219   79888 cli_runner.go:164] Run: docker container inspect functional-959058 --format={{.State.Status}}
I1205 06:25:35.162608   79888 ssh_runner.go:195] Run: systemctl --version
I1205 06:25:35.162652   79888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-959058
I1205 06:25:35.179394   79888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-959058/id_rsa Username:docker}
I1205 06:25:35.275194   79888 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.21s)
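
The same listing is also available as JSON (see the ImageListJson output below); a minimal parsing sketch, with field names taken from that output and one entry copied from it:

package main

import (
	"encoding/json"
	"fmt"
)

type image struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // bytes, encoded as a string in this output
}

func main() {
	// One entry copied from the ImageListJson stdout, trimmed to the fields used here.
	raw := []byte(`[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"repoDigests":[],"size":"31468661"}]`)
	var images []image
	if err := json.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s bytes\n", tag, img.Size)
		}
	}
}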

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-959058 image ls --format json --alsologtostderr:
[{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63582165"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76869776"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79190589"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71976228"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52744336"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31468661"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90816810"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"739536"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-959058 image ls --format json --alsologtostderr:
I1205 06:25:34.925676   79793 out.go:360] Setting OutFile to fd 1 ...
I1205 06:25:34.925952   79793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:25:34.925966   79793 out.go:374] Setting ErrFile to fd 2...
I1205 06:25:34.925971   79793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:25:34.926261   79793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
I1205 06:25:34.926995   79793 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:25:34.927138   79793 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:25:34.927668   79793 cli_runner.go:164] Run: docker container inspect functional-959058 --format={{.State.Status}}
I1205 06:25:34.945842   79793 ssh_runner.go:195] Run: systemctl --version
I1205 06:25:34.945878   79793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-959058
I1205 06:25:34.962510   79793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-959058/id_rsa Username:docker}
I1205 06:25:35.059408   79793 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)
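The listing above is a single JSON array of image objects (id, repoDigests, repoTags, size), so it lends itself to post-processing on the host. A minimal sketch, assuming jq is available on the machine running the binary (jq is not part of the test itself); functional-959058 is the profile from this run:

# Sketch only: pull every repoTag out of the logged `image ls --format json` output.
out/minikube-linux-amd64 -p functional-959058 image ls --format json \
  | jq -r '.[].repoTags[]' \
  | sort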

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-959058 image ls --format yaml --alsologtostderr:
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52744336"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63582165"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79190589"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90816810"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76869776"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71976228"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-959058 image ls --format yaml --alsologtostderr:
I1205 06:25:35.357728   79940 out.go:360] Setting OutFile to fd 1 ...
I1205 06:25:35.357812   79940 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:25:35.357820   79940 out.go:374] Setting ErrFile to fd 2...
I1205 06:25:35.357824   79940 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:25:35.357998   79940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
I1205 06:25:35.358474   79940 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:25:35.358564   79940 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:25:35.359177   79940 cli_runner.go:164] Run: docker container inspect functional-959058 --format={{.State.Status}}
I1205 06:25:35.376939   79940 ssh_runner.go:195] Run: systemctl --version
I1205 06:25:35.376979   79940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-959058
I1205 06:25:35.392929   79940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-959058/id_rsa Username:docker}
I1205 06:25:35.489164   79940 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.28s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 ssh pgrep buildkitd: exit status 1 (299.273715ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image build -t localhost/my-image:functional-959058 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-959058 image build -t localhost/my-image:functional-959058 testdata/build --alsologtostderr: (1.720155042s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-959058 image build -t localhost/my-image:functional-959058 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2cc27a7e802
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-959058
--> 4c03d7c365a
Successfully tagged localhost/my-image:functional-959058
4c03d7c365ab2aa8759ad1619796c88b5888edf5d0ee0cf8f267170340db4799
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-959058 image build -t localhost/my-image:functional-959058 testdata/build --alsologtostderr:
I1205 06:25:35.881455   80109 out.go:360] Setting OutFile to fd 1 ...
I1205 06:25:35.881754   80109 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:25:35.881766   80109 out.go:374] Setting ErrFile to fd 2...
I1205 06:25:35.881772   80109 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:25:35.881977   80109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
I1205 06:25:35.882638   80109 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:25:35.883183   80109 config.go:182] Loaded profile config "functional-959058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:25:35.883642   80109 cli_runner.go:164] Run: docker container inspect functional-959058 --format={{.State.Status}}
I1205 06:25:35.905503   80109 ssh_runner.go:195] Run: systemctl --version
I1205 06:25:35.905556   80109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-959058
I1205 06:25:35.926364   80109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/functional-959058/id_rsa Username:docker}
I1205 06:25:36.033752   80109 build_images.go:162] Building image from path: /tmp/build.3692436812.tar
I1205 06:25:36.033811   80109 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 06:25:36.044226   80109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3692436812.tar
I1205 06:25:36.048855   80109 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3692436812.tar: stat -c "%s %y" /var/lib/minikube/build/build.3692436812.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3692436812.tar': No such file or directory
I1205 06:25:36.048884   80109 ssh_runner.go:362] scp /tmp/build.3692436812.tar --> /var/lib/minikube/build/build.3692436812.tar (3072 bytes)
I1205 06:25:36.068588   80109 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3692436812
I1205 06:25:36.076589   80109 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3692436812 -xf /var/lib/minikube/build/build.3692436812.tar
I1205 06:25:36.084207   80109 crio.go:315] Building image: /var/lib/minikube/build/build.3692436812
I1205 06:25:36.084258   80109 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-959058 /var/lib/minikube/build/build.3692436812 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1205 06:25:37.513416   80109 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-959058 /var/lib/minikube/build/build.3692436812 --cgroup-manager=cgroupfs: (1.429128772s)
I1205 06:25:37.513484   80109 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3692436812
I1205 06:25:37.523795   80109 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3692436812.tar
I1205 06:25:37.532495   80109 build_images.go:218] Built localhost/my-image:functional-959058 from /tmp/build.3692436812.tar
I1205 06:25:37.532537   80109 build_images.go:134] succeeded building to: functional-959058
I1205 06:25:37.532544   80109 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.28s)
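The STEP lines in the build output above imply a three-instruction Dockerfile. The sketch below reconstructs an equivalent build context from that log; the Dockerfile body and the placeholder content.txt are inferences, not the verbatim contents of testdata/build:

# Sketch: recreate an equivalent context in a scratch directory and build it through
# the profile, as the test does; on the crio runtime the build runs via podman in the node.
mkdir -p /tmp/build-sketch
printf 'placeholder\n' > /tmp/build-sketch/content.txt
cat > /tmp/build-sketch/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-959058 image build \
  -t localhost/my-image:functional-959058 /tmp/build-sketch --alsologtostderr
out/minikube-linux-amd64 -p functional-959058 image ls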

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-959058
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-959058 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-959058 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-959058 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-959058 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 74101: os: process already finished
helpers_test.go:519: unable to terminate pid 73914: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-959058 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (13.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-959058 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [dcb6dbde-d27a-488f-8d4b-955cf3068659] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [dcb6dbde-d27a-488f-8d4b-955cf3068659] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.002510618s
I1205 06:25:21.900901   16314 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (13.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image rm kicbase/echo-server:functional-959058 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-959058 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.135.90 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
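The tunnel subtests above follow one flow: start minikube tunnel, create the nginx-svc LoadBalancer service from testdata/testsvc.yaml, wait for its pod, read the assigned ingress IP, and hit it directly. A by-hand sketch of that flow; the curl step is an assumption (the test performs its HTTP check internally), and the address should be whatever the jsonpath query prints rather than the 10.105.135.90 seen in this run:

# Sketch of the tunnel flow recorded above (curl is an assumption, not part of the test).
out/minikube-linux-amd64 -p functional-959058 tunnel --alsologtostderr &
kubectl --context functional-959058 apply -f testdata/testsvc.yaml
kubectl --context functional-959058 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m
LB_IP=$(kubectl --context functional-959058 get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -sf "http://${LB_IP}/"   # 10.105.135.90 in this run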

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-959058 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "332.837453ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "57.245658ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "324.074269ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "91.847002ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.63s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2270493858/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764915923260595611" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2270493858/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764915923260595611" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2270493858/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764915923260595611" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2270493858/001/test-1764915923260595611
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.832776ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 06:25:23.534658   16314 retry.go:31] will retry after 458.16282ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 06:25 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 06:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 06:25 test-1764915923260595611
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh cat /mount-9p/test-1764915923260595611
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-959058 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [f91dcd0d-0deb-40e7-9983-ddbf1c98228d] Pending
helpers_test.go:352: "busybox-mount" [f91dcd0d-0deb-40e7-9983-ddbf1c98228d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [f91dcd0d-0deb-40e7-9983-ddbf1c98228d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [f91dcd0d-0deb-40e7-9983-ddbf1c98228d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.002965531s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-959058 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2270493858/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.63s)
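The any-port sequence above reduces to: mount a host directory into the guest over 9p, confirm it from inside the node with findmnt, then exercise the mount from a pod. A by-hand sketch using a placeholder host path (/tmp/mount-src is an assumption; the manifest is the testdata one the test uses):

# Sketch of the mount flow logged above; /tmp/mount-src is a placeholder path.
mkdir -p /tmp/mount-src
echo "test-$(date +%s)" > /tmp/mount-src/created-by-test
out/minikube-linux-amd64 mount -p functional-959058 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-959058 ssh -- ls -la /mount-9p
kubectl --context functional-959058 replace --force -f testdata/busybox-mount-test.yaml
kubectl --context functional-959058 logs busybox-mount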

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.67s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4185747905/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.144189ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 06:25:29.156262   16314 retry.go:31] will retry after 413.439477ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4185747905/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 ssh "sudo umount -f /mount-9p": exit status 1 (259.730497ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-959058 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4185747905/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.69s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3991546214/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3991546214/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3991546214/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T" /mount1: exit status 1 (322.260149ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 06:25:30.883749   16314 retry.go:31] will retry after 537.582032ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-959058 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3991546214/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3991546214/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959058 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3991546214/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.56s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 version -o=json --components
2025/12/05 06:25:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.69s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-959058 service list: (1.686722867s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.69s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-959058 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-959058 service list -o json: (1.686517585s)
functional_test.go:1504: Took "1.686610586s" to run "out/minikube-linux-amd64 -p functional-959058 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-959058
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-959058
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-959058
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (134.16s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1205 06:37:01.488725   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m13.465297689s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (134.16s)

TestMultiControlPlane/serial/DeployApp (3.82s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 kubectl -- rollout status deployment/busybox: (1.838110794s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-7snv6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-8xt6b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-tbhpv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-7snv6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-8xt6b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-tbhpv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-7snv6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-8xt6b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-tbhpv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.82s)

TestMultiControlPlane/serial/PingHostFromPods (0.98s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-7snv6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-7snv6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-8xt6b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-8xt6b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-tbhpv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 kubectl -- exec busybox-7b57f96db7-tbhpv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.98s)
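The host-reachability check above resolves host.minikube.internal from inside each busybox pod (awk 'NR==5' and cut pick the address line out of busybox's nslookup output) and then pings the result, 192.168.49.1 in this run. A one-pod sketch built from the same commands; taking .items[0] assumes the default namespace holds only the busybox pods, as it does here:

# Sketch: repeat the check for a single pod of the busybox deployment.
POD=$(out/minikube-linux-amd64 -p ha-149311 kubectl -- get pods -o jsonpath='{.items[0].metadata.name}')
HOST_IP=$(out/minikube-linux-amd64 -p ha-149311 kubectl -- exec "$POD" -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
out/minikube-linux-amd64 -p ha-149311 kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"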

TestMultiControlPlane/serial/AddWorkerNode (56.45s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 node add --alsologtostderr -v 5
E1205 06:38:04.729514   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 node add --alsologtostderr -v 5: (55.608912855s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.45s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-149311 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (16.48s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp testdata/cp-test.txt ha-149311:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4001876251/001/cp-test_ha-149311.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311:/home/docker/cp-test.txt ha-149311-m02:/home/docker/cp-test_ha-149311_ha-149311-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m02 "sudo cat /home/docker/cp-test_ha-149311_ha-149311-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311:/home/docker/cp-test.txt ha-149311-m03:/home/docker/cp-test_ha-149311_ha-149311-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m03 "sudo cat /home/docker/cp-test_ha-149311_ha-149311-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311:/home/docker/cp-test.txt ha-149311-m04:/home/docker/cp-test_ha-149311_ha-149311-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m04 "sudo cat /home/docker/cp-test_ha-149311_ha-149311-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp testdata/cp-test.txt ha-149311-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4001876251/001/cp-test_ha-149311-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m02:/home/docker/cp-test.txt ha-149311:/home/docker/cp-test_ha-149311-m02_ha-149311.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311 "sudo cat /home/docker/cp-test_ha-149311-m02_ha-149311.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m02:/home/docker/cp-test.txt ha-149311-m03:/home/docker/cp-test_ha-149311-m02_ha-149311-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m03 "sudo cat /home/docker/cp-test_ha-149311-m02_ha-149311-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m02:/home/docker/cp-test.txt ha-149311-m04:/home/docker/cp-test_ha-149311-m02_ha-149311-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m04 "sudo cat /home/docker/cp-test_ha-149311-m02_ha-149311-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp testdata/cp-test.txt ha-149311-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4001876251/001/cp-test_ha-149311-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m03:/home/docker/cp-test.txt ha-149311:/home/docker/cp-test_ha-149311-m03_ha-149311.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311 "sudo cat /home/docker/cp-test_ha-149311-m03_ha-149311.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m03:/home/docker/cp-test.txt ha-149311-m02:/home/docker/cp-test_ha-149311-m03_ha-149311-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m02 "sudo cat /home/docker/cp-test_ha-149311-m03_ha-149311-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m03:/home/docker/cp-test.txt ha-149311-m04:/home/docker/cp-test_ha-149311-m03_ha-149311-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m04 "sudo cat /home/docker/cp-test_ha-149311-m03_ha-149311-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp testdata/cp-test.txt ha-149311-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4001876251/001/cp-test_ha-149311-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m04:/home/docker/cp-test.txt ha-149311:/home/docker/cp-test_ha-149311-m04_ha-149311.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311 "sudo cat /home/docker/cp-test_ha-149311-m04_ha-149311.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m04:/home/docker/cp-test.txt ha-149311-m02:/home/docker/cp-test_ha-149311-m04_ha-149311-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m02 "sudo cat /home/docker/cp-test_ha-149311-m04_ha-149311-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m04:/home/docker/cp-test.txt ha-149311-m03:/home/docker/cp-test_ha-149311-m04_ha-149311-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m03 "sudo cat /home/docker/cp-test_ha-149311-m04_ha-149311-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.48s)
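
The CopyFile test above exercises minikube cp in both directions between the host and every node, verifying each copy with minikube ssh. A condensed sketch of one such round-trip, reusing the profile and node names from the log (the /tmp path below is illustrative, not from the test):

  out/minikube-linux-amd64 -p ha-149311 cp testdata/cp-test.txt ha-149311-m02:/home/docker/cp-test.txt   # host -> node
  out/minikube-linux-amd64 -p ha-149311 ssh -n ha-149311-m02 "sudo cat /home/docker/cp-test.txt"         # verify on the node
  out/minikube-linux-amd64 -p ha-149311 cp ha-149311-m02:/home/docker/cp-test.txt /tmp/cp-test-back.txt  # node -> host
  diff testdata/cp-test.txt /tmp/cp-test-back.txt                                                         # contents must match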

TestMultiControlPlane/serial/StopSecondaryNode (9.62s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 node stop m02 --alsologtostderr -v 5: (8.954867193s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-149311 status --alsologtostderr -v 5: exit status 7 (661.373493ms)

-- stdout --
	ha-149311
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-149311-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-149311-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-149311-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1205 06:39:14.069635  104794 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:39:14.069893  104794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:14.069904  104794 out.go:374] Setting ErrFile to fd 2...
	I1205 06:39:14.069909  104794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:14.070120  104794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:39:14.070298  104794 out.go:368] Setting JSON to false
	I1205 06:39:14.070335  104794 mustload.go:66] Loading cluster: ha-149311
	I1205 06:39:14.070430  104794 notify.go:221] Checking for updates...
	I1205 06:39:14.070896  104794 config.go:182] Loaded profile config "ha-149311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:39:14.070916  104794 status.go:174] checking status of ha-149311 ...
	I1205 06:39:14.071384  104794 cli_runner.go:164] Run: docker container inspect ha-149311 --format={{.State.Status}}
	I1205 06:39:14.089857  104794 status.go:371] ha-149311 host status = "Running" (err=<nil>)
	I1205 06:39:14.089878  104794 host.go:66] Checking if "ha-149311" exists ...
	I1205 06:39:14.090155  104794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-149311
	I1205 06:39:14.108071  104794 host.go:66] Checking if "ha-149311" exists ...
	I1205 06:39:14.108349  104794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:39:14.108392  104794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-149311
	I1205 06:39:14.126350  104794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/ha-149311/id_rsa Username:docker}
	I1205 06:39:14.222148  104794 ssh_runner.go:195] Run: systemctl --version
	I1205 06:39:14.228274  104794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:39:14.239577  104794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:14.291055  104794 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-05 06:39:14.281834489 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:39:14.291553  104794 kubeconfig.go:125] found "ha-149311" server: "https://192.168.49.254:8443"
	I1205 06:39:14.291580  104794 api_server.go:166] Checking apiserver status ...
	I1205 06:39:14.291618  104794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:39:14.302810  104794 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1265/cgroup
	W1205 06:39:14.310493  104794 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1265/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:39:14.310537  104794 ssh_runner.go:195] Run: ls
	I1205 06:39:14.313870  104794 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1205 06:39:14.317760  104794 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1205 06:39:14.317778  104794 status.go:463] ha-149311 apiserver status = Running (err=<nil>)
	I1205 06:39:14.317786  104794 status.go:176] ha-149311 status: &{Name:ha-149311 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:39:14.317800  104794 status.go:174] checking status of ha-149311-m02 ...
	I1205 06:39:14.318008  104794 cli_runner.go:164] Run: docker container inspect ha-149311-m02 --format={{.State.Status}}
	I1205 06:39:14.334982  104794 status.go:371] ha-149311-m02 host status = "Stopped" (err=<nil>)
	I1205 06:39:14.334996  104794 status.go:384] host is not running, skipping remaining checks
	I1205 06:39:14.335002  104794 status.go:176] ha-149311-m02 status: &{Name:ha-149311-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:39:14.335017  104794 status.go:174] checking status of ha-149311-m03 ...
	I1205 06:39:14.335241  104794 cli_runner.go:164] Run: docker container inspect ha-149311-m03 --format={{.State.Status}}
	I1205 06:39:14.351969  104794 status.go:371] ha-149311-m03 host status = "Running" (err=<nil>)
	I1205 06:39:14.351992  104794 host.go:66] Checking if "ha-149311-m03" exists ...
	I1205 06:39:14.352279  104794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-149311-m03
	I1205 06:39:14.368784  104794 host.go:66] Checking if "ha-149311-m03" exists ...
	I1205 06:39:14.369066  104794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:39:14.369111  104794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-149311-m03
	I1205 06:39:14.385737  104794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/ha-149311-m03/id_rsa Username:docker}
	I1205 06:39:14.479957  104794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:39:14.492224  104794 kubeconfig.go:125] found "ha-149311" server: "https://192.168.49.254:8443"
	I1205 06:39:14.492250  104794 api_server.go:166] Checking apiserver status ...
	I1205 06:39:14.492293  104794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:39:14.502679  104794 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	W1205 06:39:14.510271  104794 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:39:14.510314  104794 ssh_runner.go:195] Run: ls
	I1205 06:39:14.513557  104794 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1205 06:39:14.517431  104794 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1205 06:39:14.517452  104794 status.go:463] ha-149311-m03 apiserver status = Running (err=<nil>)
	I1205 06:39:14.517462  104794 status.go:176] ha-149311-m03 status: &{Name:ha-149311-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:39:14.517486  104794 status.go:174] checking status of ha-149311-m04 ...
	I1205 06:39:14.517704  104794 cli_runner.go:164] Run: docker container inspect ha-149311-m04 --format={{.State.Status}}
	I1205 06:39:14.534751  104794 status.go:371] ha-149311-m04 host status = "Running" (err=<nil>)
	I1205 06:39:14.534769  104794 host.go:66] Checking if "ha-149311-m04" exists ...
	I1205 06:39:14.535035  104794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-149311-m04
	I1205 06:39:14.550255  104794 host.go:66] Checking if "ha-149311-m04" exists ...
	I1205 06:39:14.550478  104794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:39:14.550520  104794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-149311-m04
	I1205 06:39:14.566466  104794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/ha-149311-m04/id_rsa Username:docker}
	I1205 06:39:14.660923  104794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:39:14.672601  104794 status.go:176] ha-149311-m04 status: &{Name:ha-149311-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (9.62s)
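
The stderr above shows what minikube status checks per node: container state via docker container inspect, kubelet via systemctl is-active, and the apiserver via pgrep plus an HTTPS /healthz probe. A rough manual equivalent, assuming the ha-149311 profile is still up (a sketch, not taken from the test code):

  docker container inspect ha-149311 --format '{{.State.Status}}'                          # host state
  out/minikube-linux-amd64 -p ha-149311 ssh "sudo systemctl is-active kubelet"             # kubelet state
  out/minikube-linux-amd64 -p ha-149311 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"   # apiserver process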

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.29s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 node start m02 --alsologtostderr -v 5: (7.364535413s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.29s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (91.49s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 stop --alsologtostderr -v 5: (33.730841574s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 start --wait true --alsologtostderr -v 5
E1205 06:40:06.331770   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:06.338144   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:06.349473   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:06.370809   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:06.412139   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:06.493524   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:06.655090   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:06.976806   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:07.618818   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:08.900893   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:11.462752   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:16.584099   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:26.826347   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:47.307772   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 start --wait true --alsologtostderr -v 5: (57.639506247s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (91.49s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.45s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 node delete m03 --alsologtostderr -v 5: (9.652813999s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.45s)
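
The final step above checks node readiness with a kubectl go-template that prints the Ready condition status for every node. The same template, shown on its own for readability (one True per node is expected once m03 is gone):

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'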

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (37.93s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 stop --alsologtostderr -v 5
E1205 06:41:28.269396   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 stop --alsologtostderr -v 5: (37.820138252s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-149311 status --alsologtostderr -v 5: exit status 7 (113.366038ms)

-- stdout --
	ha-149311
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-149311-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-149311-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1205 06:41:44.987391  118760 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:41:44.987476  118760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:41:44.987483  118760 out.go:374] Setting ErrFile to fd 2...
	I1205 06:41:44.987493  118760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:41:44.987663  118760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:41:44.987817  118760 out.go:368] Setting JSON to false
	I1205 06:41:44.987839  118760 mustload.go:66] Loading cluster: ha-149311
	I1205 06:41:44.987909  118760 notify.go:221] Checking for updates...
	I1205 06:41:44.988215  118760 config.go:182] Loaded profile config "ha-149311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:41:44.988231  118760 status.go:174] checking status of ha-149311 ...
	I1205 06:41:44.988850  118760 cli_runner.go:164] Run: docker container inspect ha-149311 --format={{.State.Status}}
	I1205 06:41:45.009732  118760 status.go:371] ha-149311 host status = "Stopped" (err=<nil>)
	I1205 06:41:45.009772  118760 status.go:384] host is not running, skipping remaining checks
	I1205 06:41:45.009785  118760 status.go:176] ha-149311 status: &{Name:ha-149311 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:41:45.009817  118760 status.go:174] checking status of ha-149311-m02 ...
	I1205 06:41:45.010062  118760 cli_runner.go:164] Run: docker container inspect ha-149311-m02 --format={{.State.Status}}
	I1205 06:41:45.026458  118760 status.go:371] ha-149311-m02 host status = "Stopped" (err=<nil>)
	I1205 06:41:45.026474  118760 status.go:384] host is not running, skipping remaining checks
	I1205 06:41:45.026481  118760 status.go:176] ha-149311-m02 status: &{Name:ha-149311-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:41:45.026499  118760 status.go:174] checking status of ha-149311-m04 ...
	I1205 06:41:45.026703  118760 cli_runner.go:164] Run: docker container inspect ha-149311-m04 --format={{.State.Status}}
	I1205 06:41:45.042545  118760 status.go:371] ha-149311-m04 host status = "Stopped" (err=<nil>)
	I1205 06:41:45.042581  118760 status.go:384] host is not running, skipping remaining checks
	I1205 06:41:45.042593  118760 status.go:176] ha-149311-m04 status: &{Name:ha-149311-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.93s)

TestMultiControlPlane/serial/RestartCluster (60.64s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1205 06:42:01.488219   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (59.875976374s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.64s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (38.56s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 node add --control-plane --alsologtostderr -v 5
E1205 06:42:50.191059   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:43:04.732524   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-149311 node add --control-plane --alsologtostderr -v 5: (37.697600458s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-149311 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (39.57s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-101450 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-101450 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (39.567923196s)
--- PASS: TestJSONOutput/start/Command (39.57s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.9s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-101450 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-101450 --output=json --user=testUser: (7.904501754s)
--- PASS: TestJSONOutput/stop/Command (7.90s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-130677 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-130677 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.163196ms)

-- stdout --
	{"specversion":"1.0","id":"919949de-1fb1-45f6-9523-89eb2c59d7bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-130677] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"67bff435-c30e-4cc6-9f40-3c7018051aa7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"2af32f49-bcb6-4a37-9db5-b3413550bbaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9484ae35-093e-4332-9721-83f8d9f8117a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig"}}
	{"specversion":"1.0","id":"761fcd88-d382-49ec-aebf-d1ed8a5b42b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube"}}
	{"specversion":"1.0","id":"1ff5004a-0693-4919-a338-09249fa9bc50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"38363b61-fa35-4e89-bab8-03fbd812c93a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8ff6c27b-5530-4a0f-8e69-d30dd6fe5629","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-130677" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-130677
--- PASS: TestErrorJSONOutput (0.21s)
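
With --output=json minikube writes one CloudEvents object per line, as captured in the stdout above. A minimal sketch for extracting just the error event from that stream, assuming jq is available (jq is not used by the test itself):

  out/minikube-linux-amd64 start -p json-output-error-130677 --memory=3072 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
  # expected: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64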

TestKicCustomNetwork/create_custom_network (26.66s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-912924 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-912924 --network=: (24.561729932s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-912924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-912924
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-912924: (2.07599368s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.66s)

TestKicCustomNetwork/use_default_bridge_network (21.09s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-907448 --network=bridge
E1205 06:45:04.557567   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:45:06.331894   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-907448 --network=bridge: (19.116283976s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-907448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-907448
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-907448: (1.959477742s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.09s)

TestKicExistingNetwork (21.74s)

=== RUN   TestKicExistingNetwork
I1205 06:45:17.792665   16314 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1205 06:45:17.808753   16314 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1205 06:45:17.808815   16314 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1205 06:45:17.808829   16314 cli_runner.go:164] Run: docker network inspect existing-network
W1205 06:45:17.824209   16314 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1205 06:45:17.824234   16314 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1205 06:45:17.824251   16314 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1205 06:45:17.824387   16314 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 06:45:17.840331   16314 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d57cb024a629 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:ab:20:17:d9:1a} reservation:<nil>}
I1205 06:45:17.840750   16314 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ff690}
I1205 06:45:17.840785   16314 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1205 06:45:17.840834   16314 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1205 06:45:17.887030   16314 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-052258 --network=existing-network
E1205 06:45:34.033543   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-052258 --network=existing-network: (19.658374557s)
helpers_test.go:175: Cleaning up "existing-network-052258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-052258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-052258: (1.960912101s)
I1205 06:45:39.523662   16314 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (21.74s)
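
The log above shows the test pre-creating a Docker network and then pointing minikube at it with --network. A simplified sketch of the same flow, with the subnet and names taken from the log and minikube's extra bridge options omitted:

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
  out/minikube-linux-amd64 start -p existing-network-052258 --network=existing-network
  docker network ls --format '{{.Name}}'              # the pre-created network is reused rather than recreated
  out/minikube-linux-amd64 delete -p existing-network-052258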

TestKicCustomSubnet (26.32s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-279366 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-279366 --subnet=192.168.60.0/24: (24.208506546s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-279366 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-279366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-279366
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-279366: (2.093164724s)
--- PASS: TestKicCustomSubnet (26.32s)

TestKicStaticIP (21.94s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-138748 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-138748 --static-ip=192.168.200.200: (19.706218913s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-138748 ip
helpers_test.go:175: Cleaning up "static-ip-138748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-138748
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-138748: (2.096593604s)
--- PASS: TestKicStaticIP (21.94s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (48.74s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-843829 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-843829 --driver=docker  --container-runtime=crio: (23.007431321s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-847308 --driver=docker  --container-runtime=crio
E1205 06:47:01.490448   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-847308 --driver=docker  --container-runtime=crio: (19.980744548s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-843829
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-847308
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-847308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-847308
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-847308: (2.286476331s)
helpers_test.go:175: Cleaning up "first-843829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-843829
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-843829: (2.282687059s)
--- PASS: TestMinikubeProfile (48.74s)
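
The profile test above switches the active profile back and forth and lists all profiles as JSON. The same commands can be run interactively (profile names from the log):

  out/minikube-linux-amd64 profile first-843829    # make first-843829 the active profile
  out/minikube-linux-amd64 profile list -ojson     # machine-readable view of all profiles
  out/minikube-linux-amd64 profile second-847308   # switch the active profile again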

TestMountStart/serial/StartWithMountFirst (7.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-354381 --memory=3072 --mount-string /tmp/TestMountStartserial313299451/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-354381 --memory=3072 --mount-string /tmp/TestMountStartserial313299451/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.572862668s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.57s)
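
This first mount test starts a profile with a 9p host mount; the flags are visible in the command line above. A condensed sketch of the same start plus the check performed by the VerifyMount tests that follow (paths and profile name from the log):

  out/minikube-linux-amd64 start -p mount-start-1-354381 --memory=3072 \
    --mount-string /tmp/TestMountStartserial313299451/001:/minikube-host \
    --mount-gid 0 --mount-uid 0 --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p mount-start-1-354381 ssh -- ls /minikube-host   # the mounted host directory should be listed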

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-354381 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (4.69s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-368906 --memory=3072 --mount-string /tmp/TestMountStartserial313299451/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-368906 --memory=3072 --mount-string /tmp/TestMountStartserial313299451/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.688840256s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.69s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-368906 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-354381 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-354381 --alsologtostderr -v=5: (1.649274994s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-368906 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-368906
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-368906: (1.234598061s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.2s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-368906
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-368906: (6.20046158s)
--- PASS: TestMountStart/serial/RestartStopped (7.20s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-368906 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
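
The mount lifecycle exercised by the serial steps above (start with a host mount, verify, stop, restart, verify again) boils down to the following; a sketch that reuses the flags from the test invocations, with `mount-demo` and `/tmp/host-dir` as placeholders:

	# node-only profile with a host directory mounted at /minikube-host
	minikube start -p mount-demo --memory=3072 --no-kubernetes --driver=docker --container-runtime=crio --mount-string /tmp/host-dir:/minikube-host --mount-port 46464 --mount-uid 0 --mount-gid 0 --mount-msize 6543
	# confirm the mount is visible inside the node
	minikube -p mount-demo ssh -- ls /minikube-host
	# stop, restart, and confirm the mount survives
	minikube stop -p mount-demo
	minikube start -p mount-demo
	minikube -p mount-demo ssh -- ls /minikube-host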

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (92.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-256345 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1205 06:48:04.730141   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-256345 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.299039799s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.75s)
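
Bringing up the same kind of two-node cluster by hand is a single start invocation; a sketch with a placeholder profile name:

	# two-node cluster, waiting for all components to be ready
	minikube start -p multinode-demo --nodes=2 --memory=3072 --wait=true --driver=docker --container-runtime=crio
	# per-node status and the Kubernetes view of the nodes
	minikube -p multinode-demo status
	kubectl --context multinode-demo get nodes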

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-256345 -- rollout status deployment/busybox: (2.081328522s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- exec busybox-7b57f96db7-8868p -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- exec busybox-7b57f96db7-f48gl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- exec busybox-7b57f96db7-8868p -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- exec busybox-7b57f96db7-f48gl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- exec busybox-7b57f96db7-8868p -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- exec busybox-7b57f96db7-f48gl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.50s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- exec busybox-7b57f96db7-8868p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- exec busybox-7b57f96db7-8868p -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- exec busybox-7b57f96db7-f48gl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-256345 -- exec busybox-7b57f96db7-f48gl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)
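
The host-reachability check above resolves host.minikube.internal from inside a pod and pings the address it gets back; roughly, with the pod name and resolved IP as placeholders:

	# resolve the host's gateway name from inside a busybox pod, then ping it
	kubectl --context multinode-demo exec <busybox-pod> -- nslookup host.minikube.internal
	kubectl --context multinode-demo exec <busybox-pod> -- ping -c 1 <resolved-ip>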

                                                
                                    
TestMultiNode/serial/AddNode (22.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-256345 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-256345 -v=5 --alsologtostderr: (22.338270623s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (22.95s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-256345 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp testdata/cp-test.txt multinode-256345:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp multinode-256345:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile879745855/001/cp-test_multinode-256345.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp multinode-256345:/home/docker/cp-test.txt multinode-256345-m02:/home/docker/cp-test_multinode-256345_multinode-256345-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m02 "sudo cat /home/docker/cp-test_multinode-256345_multinode-256345-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp multinode-256345:/home/docker/cp-test.txt multinode-256345-m03:/home/docker/cp-test_multinode-256345_multinode-256345-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m03 "sudo cat /home/docker/cp-test_multinode-256345_multinode-256345-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp testdata/cp-test.txt multinode-256345-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp multinode-256345-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile879745855/001/cp-test_multinode-256345-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp multinode-256345-m02:/home/docker/cp-test.txt multinode-256345:/home/docker/cp-test_multinode-256345-m02_multinode-256345.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345 "sudo cat /home/docker/cp-test_multinode-256345-m02_multinode-256345.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp multinode-256345-m02:/home/docker/cp-test.txt multinode-256345-m03:/home/docker/cp-test_multinode-256345-m02_multinode-256345-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m03 "sudo cat /home/docker/cp-test_multinode-256345-m02_multinode-256345-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp testdata/cp-test.txt multinode-256345-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp multinode-256345-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile879745855/001/cp-test_multinode-256345-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp multinode-256345-m03:/home/docker/cp-test.txt multinode-256345:/home/docker/cp-test_multinode-256345-m03_multinode-256345.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345 "sudo cat /home/docker/cp-test_multinode-256345-m03_multinode-256345.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 cp multinode-256345-m03:/home/docker/cp-test.txt multinode-256345-m02:/home/docker/cp-test_multinode-256345-m03_multinode-256345-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 ssh -n multinode-256345-m02 "sudo cat /home/docker/cp-test_multinode-256345-m03_multinode-256345-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.39s)
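
The copy matrix above pairs `minikube cp` with an `ssh` read-back in every direction; the basic pattern, with profile and node names as placeholders:

	# host -> node
	minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> another node, then verify over ssh
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
	minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"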

                                                
                                    
TestMultiNode/serial/StopNode (2.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-256345 node stop m03: (1.254706309s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-256345 status: exit status 7 (473.207192ms)

                                                
                                                
-- stdout --
	multinode-256345
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-256345-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-256345-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-256345 status --alsologtostderr: exit status 7 (473.267759ms)

                                                
                                                
-- stdout --
	multinode-256345
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-256345-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-256345-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:49:53.539316  178752 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:49:53.539417  178752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:49:53.539428  178752 out.go:374] Setting ErrFile to fd 2...
	I1205 06:49:53.539434  178752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:49:53.539635  178752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:49:53.539832  178752 out.go:368] Setting JSON to false
	I1205 06:49:53.539857  178752 mustload.go:66] Loading cluster: multinode-256345
	I1205 06:49:53.539941  178752 notify.go:221] Checking for updates...
	I1205 06:49:53.540303  178752 config.go:182] Loaded profile config "multinode-256345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:49:53.540318  178752 status.go:174] checking status of multinode-256345 ...
	I1205 06:49:53.540737  178752 cli_runner.go:164] Run: docker container inspect multinode-256345 --format={{.State.Status}}
	I1205 06:49:53.558619  178752 status.go:371] multinode-256345 host status = "Running" (err=<nil>)
	I1205 06:49:53.558643  178752 host.go:66] Checking if "multinode-256345" exists ...
	I1205 06:49:53.558874  178752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-256345
	I1205 06:49:53.574972  178752 host.go:66] Checking if "multinode-256345" exists ...
	I1205 06:49:53.575212  178752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:49:53.575253  178752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-256345
	I1205 06:49:53.591604  178752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/multinode-256345/id_rsa Username:docker}
	I1205 06:49:53.686854  178752 ssh_runner.go:195] Run: systemctl --version
	I1205 06:49:53.692801  178752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:49:53.703970  178752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:49:53.759464  178752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-05 06:49:53.749636325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:49:53.759982  178752 kubeconfig.go:125] found "multinode-256345" server: "https://192.168.67.2:8443"
	I1205 06:49:53.760007  178752 api_server.go:166] Checking apiserver status ...
	I1205 06:49:53.760042  178752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:53.770937  178752 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1264/cgroup
	W1205 06:49:53.778661  178752 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1264/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:49:53.778709  178752 ssh_runner.go:195] Run: ls
	I1205 06:49:53.781974  178752 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1205 06:49:53.785795  178752 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1205 06:49:53.785816  178752 status.go:463] multinode-256345 apiserver status = Running (err=<nil>)
	I1205 06:49:53.785825  178752 status.go:176] multinode-256345 status: &{Name:multinode-256345 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:49:53.785838  178752 status.go:174] checking status of multinode-256345-m02 ...
	I1205 06:49:53.786102  178752 cli_runner.go:164] Run: docker container inspect multinode-256345-m02 --format={{.State.Status}}
	I1205 06:49:53.803145  178752 status.go:371] multinode-256345-m02 host status = "Running" (err=<nil>)
	I1205 06:49:53.803161  178752 host.go:66] Checking if "multinode-256345-m02" exists ...
	I1205 06:49:53.803411  178752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-256345-m02
	I1205 06:49:53.819551  178752 host.go:66] Checking if "multinode-256345-m02" exists ...
	I1205 06:49:53.819765  178752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:49:53.819796  178752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-256345-m02
	I1205 06:49:53.835207  178752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21997-12758/.minikube/machines/multinode-256345-m02/id_rsa Username:docker}
	I1205 06:49:53.928784  178752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:49:53.940153  178752 status.go:176] multinode-256345-m02 status: &{Name:multinode-256345-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:49:53.940179  178752 status.go:174] checking status of multinode-256345-m03 ...
	I1205 06:49:53.940463  178752 cli_runner.go:164] Run: docker container inspect multinode-256345-m03 --format={{.State.Status}}
	I1205 06:49:53.957080  178752 status.go:371] multinode-256345-m03 host status = "Stopped" (err=<nil>)
	I1205 06:49:53.957097  178752 status.go:384] host is not running, skipping remaining checks
	I1205 06:49:53.957102  178752 status.go:176] multinode-256345-m03 status: &{Name:multinode-256345-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-256345 node start m03 -v=5 --alsologtostderr: (6.350297482s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.04s)
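
Taken together, the two node tests above are just a stop/start cycle on a single worker; a sketch with a placeholder profile name:

	# stop only the m03 worker, leaving the rest of the cluster running
	minikube -p multinode-demo node stop m03
	minikube -p multinode-demo status        # non-zero (7 in the run above) while a node is down
	# bring the node back and confirm it rejoins
	minikube -p multinode-demo node start m03
	kubectl get nodes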

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (79.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-256345
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-256345
E1205 06:50:06.331455   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-256345: (29.413318734s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-256345 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-256345 --wait=true -v=5 --alsologtostderr: (49.833435316s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-256345
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.36s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-256345 node delete m03: (4.593114138s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.17s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-256345 stop: (28.254231036s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-256345 status: exit status 7 (93.491297ms)

                                                
                                                
-- stdout --
	multinode-256345
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-256345-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-256345 status --alsologtostderr: exit status 7 (94.01905ms)

                                                
                                                
-- stdout --
	multinode-256345
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-256345-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:51:53.933569  188621 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:51:53.933824  188621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:51:53.933833  188621 out.go:374] Setting ErrFile to fd 2...
	I1205 06:51:53.933837  188621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:51:53.934024  188621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:51:53.934176  188621 out.go:368] Setting JSON to false
	I1205 06:51:53.934197  188621 mustload.go:66] Loading cluster: multinode-256345
	I1205 06:51:53.934363  188621 notify.go:221] Checking for updates...
	I1205 06:51:53.934575  188621 config.go:182] Loaded profile config "multinode-256345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:51:53.934590  188621 status.go:174] checking status of multinode-256345 ...
	I1205 06:51:53.934977  188621 cli_runner.go:164] Run: docker container inspect multinode-256345 --format={{.State.Status}}
	I1205 06:51:53.951606  188621 status.go:371] multinode-256345 host status = "Stopped" (err=<nil>)
	I1205 06:51:53.951631  188621 status.go:384] host is not running, skipping remaining checks
	I1205 06:51:53.951639  188621 status.go:176] multinode-256345 status: &{Name:multinode-256345 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:51:53.951661  188621 status.go:174] checking status of multinode-256345-m02 ...
	I1205 06:51:53.951934  188621 cli_runner.go:164] Run: docker container inspect multinode-256345-m02 --format={{.State.Status}}
	I1205 06:51:53.967936  188621 status.go:371] multinode-256345-m02 host status = "Stopped" (err=<nil>)
	I1205 06:51:53.967968  188621 status.go:384] host is not running, skipping remaining checks
	I1205 06:51:53.967977  188621 status.go:176] multinode-256345-m02 status: &{Name:multinode-256345-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.44s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (25.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-256345 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1205 06:52:01.488831   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-256345 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (24.701427119s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-256345 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (25.27s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (21.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-256345
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-256345-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-256345-m02 --driver=docker  --container-runtime=crio: exit status 14 (73.352653ms)

                                                
                                                
-- stdout --
	* [multinode-256345-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-256345-m02' is duplicated with machine name 'multinode-256345-m02' in profile 'multinode-256345'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-256345-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-256345-m03 --driver=docker  --container-runtime=crio: (18.625340767s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-256345
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-256345: exit status 80 (281.875894ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-256345 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-256345-m03 already exists in multinode-256345-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-256345-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-256345-m03: (2.319127406s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.36s)

                                                
                                    
TestPreload (82.94s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-550461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1205 06:53:04.729836   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-550461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (47.571581994s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-550461 image pull gcr.io/k8s-minikube/busybox
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-550461
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-550461: (6.193777095s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-550461 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-550461 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (25.848493234s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-550461 image list
helpers_test.go:175: Cleaning up "test-preload-550461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-550461
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-550461: (2.307012098s)
--- PASS: TestPreload (82.94s)
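
The preload check amounts to pulling an extra image with preloads disabled, stopping, restarting with preloads enabled, and confirming the image survived; a sketch with a placeholder profile name:

	minikube start -p preload-demo --memory=3072 --preload=false --driver=docker --container-runtime=crio
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --preload=true
	minikube -p preload-demo image list     # busybox should still be listed
	minikube delete -p preload-demo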

                                                
                                    
TestScheduledStopUnix (97.33s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-644744 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-644744 --memory=3072 --driver=docker  --container-runtime=crio: (21.302626989s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-644744 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 06:54:28.984923  205518 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:54:28.985200  205518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:54:28.985213  205518 out.go:374] Setting ErrFile to fd 2...
	I1205 06:54:28.985219  205518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:54:28.985468  205518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:54:28.985737  205518 out.go:368] Setting JSON to false
	I1205 06:54:28.985855  205518 mustload.go:66] Loading cluster: scheduled-stop-644744
	I1205 06:54:28.986257  205518 config.go:182] Loaded profile config "scheduled-stop-644744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:54:28.986335  205518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/config.json ...
	I1205 06:54:28.986500  205518 mustload.go:66] Loading cluster: scheduled-stop-644744
	I1205 06:54:28.986592  205518 config.go:182] Loaded profile config "scheduled-stop-644744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-644744 -n scheduled-stop-644744
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 06:54:29.352630  205666 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:54:29.352844  205666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:54:29.352851  205666 out.go:374] Setting ErrFile to fd 2...
	I1205 06:54:29.352855  205666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:54:29.353034  205666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:54:29.353230  205666 out.go:368] Setting JSON to false
	I1205 06:54:29.353404  205666 daemonize_unix.go:73] killing process 205553 as it is an old scheduled stop
	I1205 06:54:29.353516  205666 mustload.go:66] Loading cluster: scheduled-stop-644744
	I1205 06:54:29.353823  205666 config.go:182] Loaded profile config "scheduled-stop-644744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:54:29.353889  205666 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/config.json ...
	I1205 06:54:29.354053  205666 mustload.go:66] Loading cluster: scheduled-stop-644744
	I1205 06:54:29.354153  205666 config.go:182] Loaded profile config "scheduled-stop-644744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1205 06:54:29.357871   16314 retry.go:31] will retry after 132.235µs: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.358991   16314 retry.go:31] will retry after 100.682µs: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.360144   16314 retry.go:31] will retry after 274.313µs: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.361247   16314 retry.go:31] will retry after 174.225µs: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.362384   16314 retry.go:31] will retry after 311.817µs: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.363529   16314 retry.go:31] will retry after 1.09127ms: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.365729   16314 retry.go:31] will retry after 979.673µs: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.366867   16314 retry.go:31] will retry after 1.372402ms: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.369074   16314 retry.go:31] will retry after 1.362684ms: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.371259   16314 retry.go:31] will retry after 2.131008ms: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.374444   16314 retry.go:31] will retry after 7.999593ms: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.382645   16314 retry.go:31] will retry after 6.949112ms: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.389845   16314 retry.go:31] will retry after 10.408483ms: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.401034   16314 retry.go:31] will retry after 19.593997ms: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.421248   16314 retry.go:31] will retry after 41.553594ms: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
I1205 06:54:29.463480   16314 retry.go:31] will retry after 53.402129ms: open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-644744 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-644744 -n scheduled-stop-644744
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-644744
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-644744 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 06:54:55.233133  206315 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:54:55.233397  206315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:54:55.233406  206315 out.go:374] Setting ErrFile to fd 2...
	I1205 06:54:55.233413  206315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:54:55.233637  206315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:54:55.233908  206315 out.go:368] Setting JSON to false
	I1205 06:54:55.234001  206315 mustload.go:66] Loading cluster: scheduled-stop-644744
	I1205 06:54:55.234309  206315 config.go:182] Loaded profile config "scheduled-stop-644744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:54:55.234394  206315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/scheduled-stop-644744/config.json ...
	I1205 06:54:55.234597  206315 mustload.go:66] Loading cluster: scheduled-stop-644744
	I1205 06:54:55.234714  206315 config.go:182] Loaded profile config "scheduled-stop-644744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
E1205 06:55:06.331483   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-644744
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-644744: exit status 7 (76.818289ms)

                                                
                                                
-- stdout --
	scheduled-stop-644744
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-644744 -n scheduled-stop-644744
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-644744 -n scheduled-stop-644744: exit status 7 (72.324189ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-644744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-644744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-644744: (4.564882339s)
--- PASS: TestScheduledStopUnix (97.33s)
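
The scheduled-stop flow driven above can be reproduced directly; a minimal sketch (profile name is a placeholder):

	# schedule a stop 5 minutes out, then replace it with a 15-second schedule
	minikube stop -p sched-demo --schedule 5m
	minikube stop -p sched-demo --schedule 15s
	# cancel whatever is pending, or let it fire and check the result
	minikube stop -p sched-demo --cancel-scheduled
	minikube status -p sched-demo --format={{.Host}}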

                                                
                                    
TestInsufficientStorage (8.58s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-238325 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-238325 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.169289747s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0ea53e1d-57aa-466d-81cb-1c4880706194","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-238325] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"01424ef0-1d89-4e1d-afc4-f1ca212ebf6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"1c1a511f-b7bf-43b1-ae33-ff0d48f475ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"05061c91-53ae-4396-9bec-c0c5e5a016d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig"}}
	{"specversion":"1.0","id":"aa7c2385-a224-4919-bb01-5b4c4d3016de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube"}}
	{"specversion":"1.0","id":"bc60e622-c0e0-4622-a870-ba8cf3dbff41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"548e7bb5-4d5d-4e05-886a-59fa718b2ad1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"024f346f-5c85-44d2-9877-239f3310306e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3ffce9e3-6b58-45d7-85fc-1f2fe1dd520b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1efb51db-2412-4d39-81e9-e76a98931d6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f66365e4-fee7-47ed-a221-639b223e330e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"574a7071-3f4e-4c58-9846-82045ad1fb1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-238325\" primary control-plane node in \"insufficient-storage-238325\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0469fefc-7fc6-459e-a30c-b9ba54b46834","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"853207f3-219e-4b8c-801e-55f268fa9160","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb290be8-b0a3-40d0-b2b1-88062cf08b6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-238325 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-238325 --output=json --layout=cluster: exit status 7 (278.412784ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-238325","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-238325","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 06:55:51.392491  208855 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-238325" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-238325 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-238325 --output=json --layout=cluster: exit status 7 (276.283568ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-238325","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-238325","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 06:55:51.669277  208963 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-238325" does not appear in /home/jenkins/minikube-integration/21997-12758/kubeconfig
	E1205 06:55:51.679288  208963 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/insufficient-storage-238325/events.json: no such file or directory

                                                
                                                
** /stderr **
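Note: the status output above is a single JSON document (cluster layout) whose top-level StatusCode of 507 maps to InsufficientStorage. A minimal Go sketch for decoding it follows; field names are taken from the JSON shown, and reading from stdin is an illustrative assumption rather than how status_test.go actually consumes it.

    // status_sketch.go - minimal sketch, not the test's own code.
    // Decodes "minikube status --output=json --layout=cluster" and flags 507.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type clusterStatus struct {
        Name         string `json:"Name"`
        StatusCode   int    `json:"StatusCode"`
        StatusName   string `json:"StatusName"`
        StatusDetail string `json:"StatusDetail"`
        Nodes        []struct {
            Name       string `json:"Name"`
            StatusCode int    `json:"StatusCode"`
            StatusName string `json:"StatusName"`
        } `json:"Nodes"`
    }

    func main() {
        var st clusterStatus
        if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        if st.StatusCode == 507 { // InsufficientStorage, as in the run above
            fmt.Printf("%s: %s (%s)\n", st.Name, st.StatusName, st.StatusDetail)
        }
        for _, n := range st.Nodes {
            fmt.Printf("node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
        }
    }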
helpers_test.go:175: Cleaning up "insufficient-storage-238325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-238325
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-238325: (1.859019294s)
--- PASS: TestInsufficientStorage (8.58s)

                                                
                                    
TestRunningBinaryUpgrade (291.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.4022232729 start -p running-upgrade-290059 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.4022232729 start -p running-upgrade-290059 --memory=3072 --vm-driver=docker  --container-runtime=crio: (19.90089676s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-290059 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 07:00:06.332062   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-290059 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.270142604s)
helpers_test.go:175: Cleaning up "running-upgrade-290059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-290059
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-290059: (2.014669209s)
--- PASS: TestRunningBinaryUpgrade (291.80s)

                                                
                                    
TestKubernetesUpgrade (315.06s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-040693 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-040693 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.842106538s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-040693
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-040693: (1.883253337s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-040693 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-040693 status --format={{.Host}}: exit status 7 (81.292171ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-040693 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-040693 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m42.251945877s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-040693 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-040693 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-040693 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (80.644615ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-040693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-040693
	    minikube start -p kubernetes-upgrade-040693 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0406932 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-040693 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
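Note: the rejected start above is the expected downgrade guard; the requested v1.28.0 sorts below the cluster's existing v1.35.0-beta.0. A minimal Go sketch of that version ordering follows; it is not minikube's actual guard code, and golang.org/x/mod/semver is an assumed dependency used only to illustrate the comparison.

    // downgrade_check_sketch.go - minimal sketch of the version comparison behind
    // K8S_DOWNGRADE_UNSUPPORTED; not minikube's implementation.
    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    func main() {
        existing := "v1.35.0-beta.0" // version already running in the cluster
        requested := "v1.28.0"       // version asked for on the second start

        if semver.Compare(requested, existing) < 0 {
            fmt.Printf("refusing to downgrade existing Kubernetes %s cluster to %s\n", existing, requested)
            return
        }
        fmt.Println("same version or upgrade: ok")
    }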
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-040693 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-040693 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4.940165252s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-040693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-040693
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-040693: (2.926242608s)
--- PASS: TestKubernetesUpgrade (315.06s)

                                                
                                    
TestMissingContainerUpgrade (73.19s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2473095923 start -p missing-upgrade-044081 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2473095923 start -p missing-upgrade-044081 --memory=3072 --driver=docker  --container-runtime=crio: (25.949775292s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-044081
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-044081: (1.747767213s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-044081
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-044081 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-044081 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.481684865s)
helpers_test.go:175: Cleaning up "missing-upgrade-044081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-044081
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-044081: (2.361612181s)
--- PASS: TestMissingContainerUpgrade (73.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-385989 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-385989 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (97.783247ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-385989] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestPause/serial/Start (46.92s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-355053 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-355053 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (46.922415479s)
--- PASS: TestPause/serial/Start (46.92s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (34.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-385989 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-385989 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.884877923s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-385989 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (303.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.4247102426 start -p stopped-upgrade-515128 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.4247102426 start -p stopped-upgrade-515128 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.94351977s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.4247102426 -p stopped-upgrade-515128 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.4247102426 -p stopped-upgrade-515128 stop: (1.992836791s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-515128 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-515128 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m20.350633665s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (303.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (27.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-385989 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1205 06:56:29.395231   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-385989 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.987659266s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-385989 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-385989 status -o json: exit status 2 (333.971921ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-385989","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-385989
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-385989: (2.87738196s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.20s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (13.99s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-355053 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-355053 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (13.976072133s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (13.99s)

                                                
                                    
TestNoKubernetes/serial/Start (4.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-385989 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-385989 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.30901829s)
--- PASS: TestNoKubernetes/serial/Start (4.31s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-12758/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-385989 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-385989 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.22933ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
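Note: the check above relies on systemctl's exit status; "is-active" returns non-zero (3 in this run) when the probed unit is not active, which is what the test expects with Kubernetes disabled. A minimal Go sketch of the same probe follows; running it directly on the node (or via "minikube ssh") and probing only the kubelet unit are illustrative assumptions.

    // kubelet_probe_sketch.go - minimal sketch mirroring the systemctl probe above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // systemctl is-active exits 0 when the unit is active, non-zero otherwise.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            if exitErr, ok := err.(*exec.ExitError); ok {
                fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
                return
            }
            fmt.Println("failed to run systemctl:", err)
            return
        }
        fmt.Println("kubelet is active")
    }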

                                                
                                    
TestNoKubernetes/serial/ProfileList (4.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E1205 06:57:01.488296   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.81411355s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.76s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-385989
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-385989: (3.287438076s)
--- PASS: TestNoKubernetes/serial/Stop (3.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-385989 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-385989 --driver=docker  --container-runtime=crio: (6.785283117s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.79s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-385989 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-385989 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.250377ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestNetworkPlugins/group/false (3.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-397607 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-397607 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (194.719806ms)

                                                
                                                
-- stdout --
	* [false-397607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:57:20.408873  239124 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:57:20.409017  239124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:20.409028  239124 out.go:374] Setting ErrFile to fd 2...
	I1205 06:57:20.409036  239124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:20.409372  239124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12758/.minikube/bin
	I1205 06:57:20.410009  239124 out.go:368] Setting JSON to false
	I1205 06:57:20.411454  239124 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5984,"bootTime":1764911856,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:57:20.411526  239124 start.go:143] virtualization: kvm guest
	I1205 06:57:20.413722  239124 out.go:179] * [false-397607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:57:20.415195  239124 notify.go:221] Checking for updates...
	I1205 06:57:20.415230  239124 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:57:20.416657  239124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:57:20.417973  239124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12758/kubeconfig
	I1205 06:57:20.419201  239124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12758/.minikube
	I1205 06:57:20.420523  239124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:57:20.421807  239124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:57:20.423672  239124 config.go:182] Loaded profile config "kubernetes-upgrade-040693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1205 06:57:20.423807  239124 config.go:182] Loaded profile config "missing-upgrade-044081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1205 06:57:20.423930  239124 config.go:182] Loaded profile config "stopped-upgrade-515128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1205 06:57:20.424036  239124 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:57:20.452702  239124 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1205 06:57:20.452814  239124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:20.525428  239124 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-05 06:57:20.514453322 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 06:57:20.525580  239124 docker.go:319] overlay module found
	I1205 06:57:20.527942  239124 out.go:179] * Using the docker driver based on user configuration
	I1205 06:57:20.529165  239124 start.go:309] selected driver: docker
	I1205 06:57:20.529181  239124 start.go:927] validating driver "docker" against <nil>
	I1205 06:57:20.529206  239124 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:57:20.531182  239124 out.go:203] 
	W1205 06:57:20.532301  239124 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1205 06:57:20.533415  239124 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-397607 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-397607" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: missing-upgrade-044081
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-515128
contexts:
- context:
    cluster: missing-upgrade-044081
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-044081
  name: missing-upgrade-044081
- context:
    cluster: stopped-upgrade-515128
    user: stopped-upgrade-515128
  name: stopped-upgrade-515128
current-context: stopped-upgrade-515128
kind: Config
users:
- name: missing-upgrade-044081
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/missing-upgrade-044081/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/missing-upgrade-044081/client.key
- name: stopped-upgrade-515128
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.key
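Note: the kubeconfig above can also be inspected programmatically. A minimal Go sketch using k8s.io/client-go's clientcmd loader follows; the path is the KUBECONFIG from this run, and the client-go dependency is an assumption made for illustration only.

    // kubeconfig_sketch.go - minimal sketch; lists contexts from the dumped kubeconfig.
    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21997-12758/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("current-context:", cfg.CurrentContext)
        for name, ctx := range cfg.Contexts {
            fmt.Printf("context %s -> cluster %s (user %s)\n", name, ctx.Cluster, ctx.AuthInfo)
        }
    }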

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-397607

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397607"

                                                
                                                
----------------------- debugLogs end: false-397607 [took: 3.174562816s] --------------------------------
helpers_test.go:175: Cleaning up "false-397607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-397607
--- PASS: TestNetworkPlugins/group/false (3.55s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (39.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.467078057s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.47s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-515128
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-515128: (1.00303229s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (38.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1205 07:01:44.558872   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:01.488274   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (38.040541821s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (38.04s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-397607 "pgrep -a kubelet"
I1205 07:02:02.870614   16314 config.go:182] Loaded profile config "auto-397607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-397607 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fx4dl" [26b1ab23-0371-4ff6-9892-6e602c691213] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fx4dl" [26b1ab23-0371-4ff6-9892-6e602c691213] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003420463s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-397607 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-5jgnk" [13f24a2c-d32a-4c40-a0e3-2827cf6dfff0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004429051s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (49.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (49.523664043s)
--- PASS: TestNetworkPlugins/group/calico/Start (49.52s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-397607 "pgrep -a kubelet"
I1205 07:02:18.797284   16314 config.go:182] Loaded profile config "kindnet-397607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-397607 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-66l78" [0ac76de7-d131-4544-8439-db7bbd29956e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-66l78" [0ac76de7-d131-4544-8439-db7bbd29956e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00320468s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-397607 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.398339103s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (70.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1205 07:03:04.730439   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-882265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m10.586629443s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.59s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-jsc5m" [b54ae538-b5c1-4b11-b8f8-3b5cbf7a2c9e] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-jsc5m" [b54ae538-b5c1-4b11-b8f8-3b5cbf7a2c9e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004564968s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-397607 "pgrep -a kubelet"
I1205 07:03:13.597928   16314 config.go:182] Loaded profile config "calico-397607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-397607 replace --force -f testdata/netcat-deployment.yaml
I1205 07:03:13.938437   16314 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1205 07:03:13.970739   16314 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gsc7s" [4d77b7c4-d66d-4cbf-94aa-4ce15268a6ac] Pending
helpers_test.go:352: "netcat-cd4db9dbf-gsc7s" [4d77b7c4-d66d-4cbf-94aa-4ce15268a6ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gsc7s" [4d77b7c4-d66d-4cbf-94aa-4ce15268a6ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003336503s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (49.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.539096493s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.54s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-397607 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.08s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-397607 "pgrep -a kubelet"
I1205 07:03:30.382542   16314 config.go:182] Loaded profile config "custom-flannel-397607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-397607 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6d2m4" [3b07ad6b-57ce-4c82-be93-52bcf6da0eaa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6d2m4" [3b07ad6b-57ce-4c82-be93-52bcf6da0eaa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004160825s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-397607 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (37.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-397607 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (37.967784137s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (52.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.499610323s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.50s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-397607 "pgrep -a kubelet"
I1205 07:04:00.239868   16314 config.go:182] Loaded profile config "enable-default-cni-397607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-397607 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8twbc" [960d1c67-7bfc-4261-8924-a8e79783c1bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8twbc" [960d1c67-7bfc-4261-8924-a8e79783c1bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003840803s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-sprrh" [32aba46c-d208-4a21-98d2-f4c0b7809ba6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003354642s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-397607 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-397607 "pgrep -a kubelet"
I1205 07:04:11.140785   16314 config.go:182] Loaded profile config "flannel-397607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-397607 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jg7f4" [5705e619-00b5-421d-8451-a6ced00cb35a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jg7f4" [5705e619-00b5-421d-8451-a6ced00cb35a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003558942s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-397607 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-397607 "pgrep -a kubelet"
I1205 07:04:23.015670   16314 config.go:182] Loaded profile config "bridge-397607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-397607 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6mvtt" [352e26de-453c-479a-8787-0c72c1f0cc68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6mvtt" [352e26de-453c-479a-8787-0c72c1f0cc68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004347565s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (51.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (51.069985843s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.07s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-397607 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-397607 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (76.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (1m16.497616051s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-874709 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5446a9ce-ce83-4e1d-9425-c44cc40a4d5c] Pending
helpers_test.go:352: "busybox" [5446a9ce-ce83-4e1d-9425-c44cc40a4d5c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5446a9ce-ce83-4e1d-9425-c44cc40a4d5c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003857608s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-874709 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (41.26612294s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (15.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-874709 --alsologtostderr -v=3
E1205 07:05:06.331717   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/functional-959058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-874709 --alsologtostderr -v=3: (15.959207934s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874709 -n old-k8s-version-874709
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874709 -n old-k8s-version-874709: exit status 7 (78.266505ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-874709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (45.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-874709 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (45.406169462s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874709 -n old-k8s-version-874709
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-008839 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [77583b71-31d4-4d4c-8696-58ffa671159e] Pending
helpers_test.go:352: "busybox" [77583b71-31d4-4d4c-8696-58ffa671159e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [77583b71-31d4-4d4c-8696-58ffa671159e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003436279s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-008839 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-008839 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-008839 --alsologtostderr -v=3: (16.612657438s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.61s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-172186 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [17b6c1ea-a6af-43b5-91c4-189bf0265bc6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [17b6c1ea-a6af-43b5-91c4-189bf0265bc6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004037892s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-172186 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-172186 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-172186 --alsologtostderr -v=3: (16.414528537s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008839 -n no-preload-008839
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008839 -n no-preload-008839: exit status 7 (76.16403ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-008839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (46.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-008839 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.641549131s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008839 -n no-preload-008839
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-770390 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a67b9028-baba-44af-9d25-db1f756f4ab3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a67b9028-baba-44af-9d25-db1f756f4ab3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004579934s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-770390 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186: exit status 7 (77.859629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-172186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-172186 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (49.46748438s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-172186 -n default-k8s-diff-port-172186
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xn6nb" [fd771a55-07e2-4e40-8419-550f7c0bfe62] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003562435s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-770390 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-770390 --alsologtostderr -v=3: (18.210178165s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xn6nb" [fd771a55-07e2-4e40-8419-550f7c0bfe62] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003799712s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-874709 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-874709 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (29.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (29.980235963s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-770390 -n embed-certs-770390
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-770390 -n embed-certs-770390: exit status 7 (88.151769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-770390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (45.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-770390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (45.638955836s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-770390 -n embed-certs-770390
E1205 07:07:12.472138   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kindnet-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:12.478574   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kindnet-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:12.489994   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kindnet-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:12.511502   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kindnet-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-cwnkq" [451d375c-9b1d-43b2-b096-76e6d8a568da] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00315666s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-cwnkq" [451d375c-9b1d-43b2-b096-76e6d8a568da] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003556664s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-008839 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-008839 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2clpl" [87d42add-da6e-4b7e-9e1e-f138da402fed] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004255535s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2clpl" [87d42add-da6e-4b7e-9e1e-f138da402fed] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003580232s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-172186 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/Stop (2.64s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-624263 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-624263 --alsologtostderr -v=3: (2.641091136s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.64s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-624263 -n newest-cni-624263
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-624263 -n newest-cni-624263: exit status 7 (77.105708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-624263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (10.7s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1205 07:07:01.487806   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/addons-177895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-624263 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (10.383001539s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-624263 -n newest-cni-624263
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.70s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-172186 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-624263 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2kzfd" [7fc53b6c-2249-43c2-9989-72cc5652b20b] Running
E1205 07:07:12.553666   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kindnet-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:12.635372   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kindnet-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:12.796888   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kindnet-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:13.119005   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kindnet-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:13.337977   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:07:13.761227   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kindnet-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00372192s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2kzfd" [7fc53b6c-2249-43c2-9989-72cc5652b20b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002721292s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-770390 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1205 07:07:23.579373   16314 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/auto-397607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-770390 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

Test skip (33/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.14
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
380 TestNetworkPlugins/group/kubenet 3.78
388 TestNetworkPlugins/group/cilium 3.83
394 TestStartStop/group/disable-driver-mounts 0.18

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.14s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1205 06:04:55.625417   16314 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1205 06:04:55.679366   16314 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
W1205 06:04:55.761767   16314 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.14s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.78s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-397607 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-397607

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-397607

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-397607

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-397607

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-397607

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-397607

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-397607

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-397607

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-397607

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-397607

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: /etc/hosts:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: /etc/resolv.conf:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-397607

>>> host: crictl pods:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: crictl containers:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> k8s: describe netcat deployment:
error: context "kubenet-397607" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-397607" does not exist

>>> k8s: netcat logs:
error: context "kubenet-397607" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-397607" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-397607" does not exist

>>> k8s: coredns logs:
error: context "kubenet-397607" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-397607" does not exist

>>> k8s: api server logs:
error: context "kubenet-397607" does not exist

>>> host: /etc/cni:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: ip a s:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: ip r s:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: iptables-save:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: iptables table nat:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-397607" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-397607" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-397607" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: kubelet daemon config:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> k8s: kubelet logs:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: missing-upgrade-044081
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-515128
contexts:
- context:
    cluster: missing-upgrade-044081
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-044081
  name: missing-upgrade-044081
- context:
    cluster: stopped-upgrade-515128
    user: stopped-upgrade-515128
  name: stopped-upgrade-515128
current-context: stopped-upgrade-515128
kind: Config
users:
- name: missing-upgrade-044081
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/missing-upgrade-044081/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/missing-upgrade-044081/client.key
- name: stopped-upgrade-515128
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-397607

>>> host: docker daemon status:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: docker daemon config:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: docker system info:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: cri-docker daemon status:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: cri-docker daemon config:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: cri-dockerd version:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: containerd daemon status:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: containerd daemon config:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: containerd config dump:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: crio daemon status:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: crio daemon config:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: /etc/crio:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

>>> host: crio config:
* Profile "kubenet-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397607"

----------------------- debugLogs end: kubenet-397607 [took: 3.588168293s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-397607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-397607
--- SKIP: TestNetworkPlugins/group/kubenet (3.78s)

TestNetworkPlugins/group/cilium (3.83s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-397607 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-397607

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-397607

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-397607

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-397607

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-397607

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-397607

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-397607

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-397607

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-397607

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-397607

>>> host: /etc/nsswitch.conf:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

>>> host: /etc/hosts:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

>>> host: /etc/resolv.conf:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-397607

>>> host: crictl pods:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

>>> host: crictl containers:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

>>> k8s: describe netcat deployment:
error: context "cilium-397607" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-397607" does not exist

>>> k8s: netcat logs:
error: context "cilium-397607" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-397607" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-397607" does not exist

>>> k8s: coredns logs:
error: context "cilium-397607" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-397607" does not exist

>>> k8s: api server logs:
error: context "cilium-397607" does not exist

>>> host: /etc/cni:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-397607

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-397607

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-397607

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-397607

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-397607" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-397607" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-040693
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: missing-upgrade-044081
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12758/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-515128
contexts:
- context:
    cluster: kubernetes-upgrade-040693
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-040693
  name: kubernetes-upgrade-040693
- context:
    cluster: missing-upgrade-044081
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 06:57:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-044081
  name: missing-upgrade-044081
- context:
    cluster: stopped-upgrade-515128
    user: stopped-upgrade-515128
  name: stopped-upgrade-515128
current-context: kubernetes-upgrade-040693
kind: Config
users:
- name: kubernetes-upgrade-040693
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kubernetes-upgrade-040693/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/kubernetes-upgrade-040693/client.key
- name: missing-upgrade-044081
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/missing-upgrade-044081/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/missing-upgrade-044081/client.key
- name: stopped-upgrade-515128
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12758/.minikube/profiles/stopped-upgrade-515128/client.key
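Note on the dump above: every kubectl-based check in this debugLogs section targets the context "cilium-397607", which is absent from the kubeconfig shown here (only the three *-upgrade profiles remain), so the repeated "context ... does not exist" / "context was not found" errors are expected. A minimal sketch, assuming client-go's clientcmd package and a hypothetical kubeconfig path (not taken from this report), of how that missing-context condition can be checked programmatically:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path for illustration; substitute the kubeconfig you want to inspect.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21997-12758/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	// cfg.Contexts is a map keyed by context name; a missing key is the
	// same condition kubectl reports throughout the dump above.
	if _, ok := cfg.Contexts["cilium-397607"]; !ok {
		fmt.Println(`error: context "cilium-397607" does not exist`)
	}
}

Run against a kubeconfig like the one printed above, this reproduces the same failure mode as the kubectl calls in the dump, since "cilium-397607" is not among the defined contexts.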

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-397607

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-397607" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397607"

                                                
                                                
----------------------- debugLogs end: cilium-397607 [took: 3.653912669s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-397607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-397607
--- SKIP: TestNetworkPlugins/group/cilium (3.83s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-245906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-245906
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
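Both SKIP blocks in this report follow the same shape: a gating check decides the test cannot run in this environment, and the leftover profile is then removed with "out/minikube-linux-amd64 delete -p <profile>". A minimal sketch of that pattern, assuming a hypothetical helper (the real gating and cleanup live in start_stop_delete_test.go and helpers_test.go and may differ in detail):

package sketch

import (
	"os/exec"
	"testing"
)

// testDisableDriverMountsSketch is a hypothetical stand-in, not the actual test.
func testDisableDriverMountsSketch(t *testing.T, driver, profile string) {
	// Cleanup is registered before the skip so it still runs when the test
	// is skipped, matching the "Cleaning up ... profile" lines in the log.
	t.Cleanup(func() {
		out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("cleanup failed: %v\n%s", err, out)
		}
	})
	if driver != "virtualbox" {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
	// ... the actual start/stop assertions would follow here ...
}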

                                                
                                    